Continue pipeline after WebSocketServerProtocolHandler when no upgrade specified? - websocket

I have a WebSocketServerProtocolHandler on the root path, where I also accept regular HTTP requests. However, WebSocketServerProtocolHandler won't let me handle my HTTP requests, as it assumes everything is a WebSocket and responds with:
not a WebSocket handshake request: missing upgrade
Can I simply continue execution of the pipeline after WebSocketServerProtocolHandler when an upgrade to WebSockets is not required? In other words, I need both HTTP and WebSockets to operate on the same address.
Yeah, I could probably copy/paste and write my own WebSocketServerProtocolHandler, but is there a better way?

The javadoc for WebSocketServerProtocolHandler refers you to the io.netty.example.http.websocketx.html5.WebSocketServer example. However, it might not be entirely obvious what's happening there.
If you take a look at the source code for WebSocketServerInitializer, you can see that by default it sets up a fairly standard HTTP pipeline. This is because, as you know, the upgrade request is an HTTP request. The magic happens in the handleHttpRequest method of WebSocketServerHandler. It falls through to line 96 and assumes the request is an upgrade request (you might want to actually check). It creates a WebSocketServerHandshaker and starts the handshake. The trick is that the handshaker automatically reconfigures the pipeline to handle WebSocket traffic, so you don't have to. Take a look at the handshake method in WebSocketServerHandshaker to see what's going on.
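If you would rather keep WebSocketServerProtocolHandler and still serve plain HTTP on the same address, one common pattern is to put an ordinary HTTP handler in front of it that answers non-upgrade requests itself and forwards upgrade requests further down the pipeline. Below is a minimal, untested sketch assuming Netty 4.x; the class names (HttpAndWebSocketInitializer, PlainHttpHandler, MyFrameHandler) are illustrative, not from the Netty examples.

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.*;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;
import io.netty.util.CharsetUtil;

public class HttpAndWebSocketInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        p.addLast(new HttpServerCodec());
        p.addLast(new HttpObjectAggregator(64 * 1024));
        p.addLast(new PlainHttpHandler());                  // answers non-upgrade requests itself
        p.addLast(new WebSocketServerProtocolHandler("/")); // performs the handshake on upgrade requests
        p.addLast(new MyFrameHandler());                    // your WebSocket frame logic
    }
}

// Lets WebSocket upgrade requests pass through; serves everything else as plain HTTP.
class PlainHttpHandler extends SimpleChannelInboundHandler<FullHttpRequest> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest req) {
        if (req.headers().contains(HttpHeaderNames.UPGRADE, HttpHeaderValues.WEBSOCKET, true)) {
            // Retain and forward so WebSocketServerProtocolHandler can do the handshake.
            ctx.fireChannelRead(req.retain());
            return;
        }
        FullHttpResponse res = new DefaultFullHttpResponse(
                HttpVersion.HTTP_1_1, HttpResponseStatus.OK,
                Unpooled.copiedBuffer("Hello over plain HTTP\n", CharsetUtil.UTF_8));
        res.headers().set(HttpHeaderNames.CONTENT_TYPE, "text/plain; charset=UTF-8");
        res.headers().setInt(HttpHeaderNames.CONTENT_LENGTH, res.content().readableBytes());
        ctx.writeAndFlush(res);
    }
}

// Placeholder frame handler so the sketch is self-contained.
class MyFrameHandler extends SimpleChannelInboundHandler<TextWebSocketFrame> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, TextWebSocketFrame frame) {
        ctx.writeAndFlush(new TextWebSocketFrame("echo: " + frame.text()));
    }
}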

Related

How to use Netty to run a WebSocket server and not process other types of HTTP requests (like GET and POST).

I'm using Netty 4.1.1 to run a WebSocket server. I only want to process WebSocket traffic, not other types of HTTP requests (like GET and POST).
In Netty API for WebSocketServerProtocolHandler, it says:
"This handler does all the heavy lifting for you to run a websocket server. It takes care of websocket handshaking as well as processing of control frames (Close, Ping, Pong). Text and Binary data frames are passed to the next handler in the pipeline (implemented by you) for processing. See io.netty.example.http.websocketx.html5.WebSocketServer for usage. The implementation of this handler assumes that you just want to run a websocket server and not process other types HTTP requests (like GET and POST)."
But I can't find io.netty.example.http.websocketx.html5.WebSocketServer.
Any idea? Thanks.
It seems that the doc is outdated.
You can refer to io.netty.example.http.websocketx.server.WebSocketServer instead.
If you want to check io.netty.example.http.websocketx.html5.WebSocketServer, you can check out commit 2704efc.
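For reference, here is a minimal, untested sketch of the kind of pipeline that example builds with Netty 4.1, assuming a WebSocket endpoint at "/ws" (the path and the EchoFrameHandler name are placeholders). WebSocketServerProtocolHandler does the handshake and the control frames; only Text and Binary frames reach your own handler.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;

public class WebSocketOnlyInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        p.addLast(new HttpServerCodec());
        p.addLast(new HttpObjectAggregator(65536));
        // Handles the handshake on /ws plus the Close/Ping/Pong control frames.
        p.addLast(new WebSocketServerProtocolHandler("/ws"));
        p.addLast(new EchoFrameHandler());
    }
}

// Sees only Text frames; control frames never reach this handler.
class EchoFrameHandler extends SimpleChannelInboundHandler<TextWebSocketFrame> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, TextWebSocketFrame frame) {
        ctx.writeAndFlush(new TextWebSocketFrame("echo: " + frame.text()));
    }
}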

Making request using WebSockets in sails but not receiving response from the server

I'm starting with Websockets and I have a problem.
I have a sails.js application that uses sockets to update the client side.
On the client side it makes an API call using socket.get("/api/v1/actor...") to fetch all the items from the database. This is what I see in the WebSocket traffic in the Chrome console:
As you can see, the connection has been established and the API call has been correctly done through the socket.
The problem is, there is no answer from the server, not even an error.
If I make the same API call using Ajax I get a response, but it doesn't work using WebSockets.
Any idea what might be producing this behavior?
EDIT: I've added the code here that processes the request and this one here that sends the request, but the problem is that this code never executes. I think we are closer to finding the cause, since we believe it has to do with a network problem. We figured out there is an F5 reverse proxy which is not properly set up to handle WebSockets.
My answer didn't make any sense once I saw the code, which is why I've edited it. I only answered because I couldn't comment on your question and ask you for the code.
Your calling code seems correct, and on the server side the response should be handled automatically by the framework; you only need to return some JSON in the controller method.
I instantiated a copy of the server (just changed the adapters to run it locally) and the server replied to the WebSocket requests (although I only tested the route '/index').
Normally, when the problems are caused by a reverse proxy, the socket simply refuses to connect and you can't even send data to the server. Does the property "socket.socket.connected" return true?
The best way to test is to write a small Node application with the socket.io client and run it on the same machine the application server is running on; then you can rule out network problems.

lighttpd/mod_websocket mqtt handshake fail (no subproto)

I have set up lighttpd with mod_websocket as discussed in Dom Bramley's blog entry (except that I am using a BeagleBone Black with Debian Wheezy instead of an rPi.)
https://www.ibm.com/developerworks/community/blogs/B-Fool/entry/setting_up_an_mqtt_websocket_gateway_for_raspberry_pi?lang=en
[During the lighttpd/mod_websocket build process I was asked if I wanted to patch the server and I said yes.]
I have the Mosquitto MQTT broker running on the same host, publishing on various topics.
When I try to connect to the broker with a browser client via the WebSocket, I can see that everything works okay in terms of the HTTP upgrade to WebSocket and forwarding the connection request to Mosquitto. Mosquitto gets the connection request and accepts it. However, the response that gets back to the browser does not include a Sec-WebSocket-Protocol header echoing the subprotocol specifier mqttv3.1 that was in the original upgrade request. The client correctly rejects this answer and the connection is shut down.
The JavaScript error from mqttws31.js:912 is "Sent non-empty Sec-Websocket-Protocol header but no response is received." With Wireshark, I can see that this is true; the 101 Switching Protocols response has the Upgrade, Connection, and Sec-WebSocket-Accept headers, but nothing else.
My mod_websocket config file defines host, port, type, and subproto the same as Dom's example, and I can see from various debug statements that the request gets all the way to Mosquitto correctly.
Can anyone suggest how to get the Sec-WebSocket-Protocol header included in the response? It must be possible; Dom wrote a blog post describing how he did it!
I think recent versions of mod_websocket broke/removed subprotocol support, but I can't confirm that right now. You could try an earlier version, or use a dedicated WebSocket-to-TCP gateway like WSS:
https://github.com/stylpen/WSS/
The mod_websocket author (Norio Kobota) quickly and effectively resolved this issue for me by making an update to mod_websocket. The fix is currently in a development branch, and available on github. Our discussion is part of the thread for mod_websocket issue 28.
Briefly, my use case (a pre-written client library and an existing TCP backend) is much less flexible than a roll-your-own client and server combination with respect to connect-time protocol negotiation. However, in my case I don't really need any flexibility or negotiation with the backend, so mod_websocket can just echo the configuration it has been given without having to dive into the details of the subprotocol.
The updated mod_websocket echoes the subproto entry from its config file during the WebSocket handshake, which satisfies the MQTT client library.
So now I have two solutions for adapters between websocket clients and TCP backends! Thanks all for your help.
Doug Johnson

How to handle different (url) websocket connections in netty

The WebSocket example in Netty has an HTTP request handler which:
performs the handshake (at first)
(then) handles different types of WebSocket frames, eventually TextWebSocketFrames.
There is only one URL for WebSocket connections in this example.
The problem is, once the TextWebSocketFrame-based WebSocket communication starts, there is no direct way to determine the WebSocket URL from the TextWebSocketFrames themselves (correct me if I am wrong).
So, how do you handle different (URL) WebSocket connections in Netty?
One solution could be registering channels and their "WebSocket connection URLs" during the handshaking process.
The other is having only one WebSocket connection URL and resolving different contexts by adding extra information to the WebSocket messages (TextWebSocketFrames).
I don't find these solutions elegant, so any ideas?
It is my understanding that when you perform a WebSocket handshake, it is to a specific URL; that is specified in the WebSocket standard (see RFC 6455). Hence, there is no URL information in the TextWebSocketFrame, because the assumption is that the frame will be sent to the URL to which the socket is bound.
To handle different URLs, you will have to either:
Set up a different pipeline and bind to a different IP and/or port for each URL, or
Like you stated, customise the hand shake and store the URL with the channel.
Personally, I've just used JSON in a TextWebSocketFrame. In my JSON, I have a field that states the intended action. This field is used for routing to the appropriate message handler.
I think it comes down to a design decision. WebSockets are intended for long-lived connections where a request message can have 0, 1 or more than 1 responses. This contrasts with the REST-style model of 1 request and 1 response.
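As a concrete illustration of the "action field" idea described above, here is a minimal, untested sketch; the JSON shape, the action names and the use of Jackson for parsing are my assumptions, not part of the answer.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;

// Routes incoming frames by an "action" field, e.g. {"action":"chat","text":"hi"}.
class RoutingFrameHandler extends SimpleChannelInboundHandler<TextWebSocketFrame> {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, TextWebSocketFrame frame) throws Exception {
        JsonNode msg = MAPPER.readTree(frame.text());
        String action = msg.path("action").asText("");
        switch (action) {
            case "chat":
                // handleChat(ctx, msg);        // hypothetical handler
                break;
            case "presence":
                // handlePresence(ctx, msg);    // hypothetical handler
                break;
            default:
                ctx.writeAndFlush(new TextWebSocketFrame("{\"error\":\"unknown action\"}"));
        }
    }
}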
Hope this helps.
The question "how to handle different (url) websocket connections in netty" does not make sense, I presume that the author meant to ask "how to serve multiple different websocket paths on a single port:host".
The question is valid because the HTTP protocol, (at least version 1.1,) WebSockets, and web browsers all support this scenario:
Client connects to server and the two start exchanging HTTP request/response pairs.
Client sends the HTTP request to upgrade to WebSocket, server honors it, and now a WebSocket is established between client and server.
The original HTTP connection remains open, so client and server can continue exchanging HTTP request/response pairs in parallel to the WebSocket. (In light of this, the term "upgrade" is a misnomer, because the connection is not upgraded at all; instead, a new connection is established for the WebSocket.)
Since the HTTP connection is still available, the client can send another HTTP upgrade request, thus creating another WebSocket. On the client side, it would look like this:
socket1 = new WebSocket( "https://acme.com:8443/alpha" );
socket2 = new WebSocket( "https://acme.com:8443/bravo" );
However, you can't have that, because Netty in all its magnificent glory and terrifying complexity does not exactly support that, and this is true even now, 10 years after the question was asked.
That's because:
Only one ServerBootstrap can bind to a given port on a given host.
(That's how the socket layer works.)
A ServerBootstrap can only have one "Child Handler".
(ServerBootstrap.childHandler() silently accepts a second invocation without reporting an error, but only the last invocation takes effect.)
A ChannelPipeline can only have one WebSocketServerProtocolHandler.
(Only the first WebSocketServerProtocolHandler that you add works, and Netty silently accepts additional ones without issuing an error.)
A WebSocketServerProtocolHandler accepts one and only one webSocketPath.
So, there you have it: a host:port can only have one webSocketPath, and that's a Netty limitation.
It might be possible to overcome this limitation by rewriting WebSocketServerProtocolHandler, but #aintNoBodyGotNoTimeFoDat.
Luckily, Netty does support another feature which makes it possible to achieve something similar. The constructor of WebSocketServerProtocolHandler supports a poorly documented and poorly named checkStartsWith parameter which, if set to true, will cause the handler to honor WebSocket negotiation requests not only on the given webSocketPath but also on any WebSocket path that starts with the given webSocketPath and continues with a '?' or a '/' followed by other stuff. So, the code on the client would then look like this:
socket1 = new WebSocket( "https://acme.com:8443/allWebSocketsHere/alpha" );
socket2 = new WebSocket( "https://acme.com:8443/allWebSocketsHere/bravo" );
If you decide to build your Netty server to handle this, the next problem you will face is how to obtain the "/allWebSocketsHere/alpha" and "/allWebSocketsHere/bravo" parts. Luckily, someone else has already figured that out; see "Netty: How to use query string with websocket?" https://stackoverflow.com/a/47897963/773113
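Putting the pieces together, a minimal, untested sketch might look like the following (Netty 4.1; the "/allWebSocketsHere" prefix and the PathAwareFrameHandler name are illustrative, and the exact constructor overload that takes checkStartsWith may differ between Netty versions). The concrete path is recovered from the HandshakeComplete user event, as described in the linked answer.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;

public class PrefixedWebSocketInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        p.addLast(new HttpServerCodec());
        p.addLast(new HttpObjectAggregator(65536));
        // checkStartsWith = true: accept /allWebSocketsHere/alpha, /allWebSocketsHere/bravo, ...
        p.addLast(new WebSocketServerProtocolHandler("/allWebSocketsHere", true));
        p.addLast(new PathAwareFrameHandler());
    }
}

class PathAwareFrameHandler extends SimpleChannelInboundHandler<TextWebSocketFrame> {
    private String path;  // e.g. "/allWebSocketsHere/alpha"

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof WebSocketServerProtocolHandler.HandshakeComplete) {
            // The request URI of the handshake, including the path chosen by the client.
            path = ((WebSocketServerProtocolHandler.HandshakeComplete) evt).requestUri();
        }
        super.userEventTriggered(ctx, evt);
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, TextWebSocketFrame frame) {
        ctx.writeAndFlush(new TextWebSocketFrame("received on " + path + ": " + frame.text()));
    }
}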

Any HTTP proxies with explicit, configurable support for request/response buffering and delayed connections?

When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5 ms. I am looking for an HTTP server, balancer or proxy server that supports the following:
1. A request arrives at the proxy. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
2. The proxy server stops buffering the request when either a size limit has been reached (say, 4 KB), or the request has been received completely, headers and body.
3. Only now, with (part of) the request in memory, is a connection opened to the backend and the request relayed.
4. The backend sends back the response. Again, the proxy server starts buffering it immediately (up to a more generous size, say 64 KB).
5. Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.
6. The proxy sends the response back to the mobile client, as fast or as slow as the client is capable of, without a connection to the backend tying up resources.
I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? Is there one that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model?
(Side rant: I would be using nginx, but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh)
What about using both nginx and Squid (client — Squid — nginx — backend)? When returning data from a backend, Squid does convert it from C-T-E: chunked to a regular stream with Content-Length set, so maybe it can normalize POST also.
Nginx can do everything you want. The configuration parameters you are looking for are
http://wiki.codemongers.com/NginxHttpCoreModule#client_body_buffer_size
and
http://wiki.codemongers.com/NginxHttpProxyModule#proxy_buffer_size
Fiddler, a free tool from Telerik, does at least some of the things you're looking for.
Specifically, go to Rules | Custom Rules... and you can add arbitrary Javascript code at all points during the connection. You could simulate some of the things you need with sleep() calls.
I'm not sure this method gives you the fine buffering control you want, however. Still, something might be better than nothing?
Squid 2.7 can support 1-3 with a patch:
http://www.squid-cache.org/Versions/v2/HEAD/changesets/12402.patch
I've tested this and found it to work well, with the proviso that it only buffers to memory, not disk (unless it swaps, of course, and you don't want this), so you need to run it on a box that's appropriately provisioned for your workload.
Chunked POSTs are a problem for most servers and intermediaries. Are you sure you need to support them? Usually clients should retry the request when they get a 411.
Unfortunately, I'm not aware of a ready-made solution for this. In the worst case scenario, consider developing it yourself, say, using Java NIO -- it shouldn't take more than a week.
