There is currently a function in tornado:
WebSocketHandler.get_compression_options()
Is there currently a leading compression method that people use for WebSockets? Would Tornado look to adopt such methods in the future? And how would clients decompress the received messages?
Tornado supports websocket compression according to RFC 7692. To enable compression, return an empty dictionary from get_compression_options() (instead of None, which is the default and disables compression). If compression is enabled on both sides of the connection it will be used automatically; this is transparent to the application.
In the future, it may be possible to return other options in this dictionary (such as a compression_level parameter) to make tradeoffs between the amount of compression and CPU/memory usage, although no such options are currently implemented.
Compression has been supported since Tornado 4.0 via WebSocketHandler.get_compression_options().
Example code:

    class ChatSocketHandler(tornado.websocket.WebSocketHandler):
        def get_compression_options(self):
            # Any non-None return value enables compression; an empty
            # dict uses the default options, while compression_level
            # and mem_level tune zlib's speed/memory trade-offs.
            return {'compression_level': 5, 'mem_level': 5}
At the moment I am working on some performance measurements in Vulkan. I want to measure the difference between uncompressed formats such as VK_FORMAT_R32_SFLOAT and compressed formats such as VK_FORMAT_BC6H_UFLOAT_BLOCK. Is there a built-in feature in Vulkan that allows switching between formats at runtime?
Since the data is created at runtime, it is unfortunately not an option to compress the data offline. I also know that I could implement the compression myself, but BC6 is so complex that I would like to avoid it if possible.
If Vulkan does not support this feature, is there some C++ lib that I could use instead?
Vulkan does not have built-in on-the-fly image compression. According to a quick Google search, the DirectXTex library seems like it should do what you want.
I am implementing a WebSocket server in C and was wondering what the purpose of the text/binary frame indicators (opcodes 1 and 2) is. Why are they there? In the end, in both cases the payload contains bytes, and when a protocol runs over the WebSocket I know what to expect in the data. Is it because, when it's a text message, I can be sure the payload contains only valid UTF-8 data?
I will start my answer by pointing out that WebSockets are often implemented with a Javascript client in mind (i.e., a browser).
When you're using C, the different opcodes might be used in different ways, but when using JavaScript, this difference controls the type of the data in the message event (Blob vs. String).
As you point out in the question, a string is always a valid UTF-8 stream of bytes, whereas a blob need not be.
This affects some data transport schemes (such as JSON parsing, which requires a valid UTF-8 stream).
Obviously, in C, this opcode could be used in different ways, but it would be better to use the opcode in the same manner as a potential JavaScript client.
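To make the distinction concrete, here is a minimal Python sketch (a hypothetical helper, not from any particular library) of what a server-side frame handler does with the two data opcodes:

```python
def decode_payload(opcode: int, payload: bytes):
    """Interpret a WebSocket data-frame payload according to its opcode."""
    if opcode == 0x1:
        # Text frame: the payload must be valid UTF-8 (RFC 6455 §5.6);
        # a JavaScript client would receive this as a String.
        return payload.decode("utf-8")  # raises UnicodeDecodeError if invalid
    if opcode == 0x2:
        # Binary frame: opaque bytes; a JavaScript client would receive
        # this as a Blob (or ArrayBuffer).
        return payload
    raise ValueError("opcode %#x is not a data frame" % opcode)
```

Note that a conforming endpoint is expected to fail the connection (close code 1007) when a text frame contains invalid UTF-8, rather than merely raising an error locally as this sketch does.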
P.S.
There are a number of Websocket C libraries and frameworks out there (I'm the author of facil.io).
Unless this is a study project, I would consider using one of the established frameworks / libraries.
I am making a call to an endpoint that returns JSON. When I save the data to a file, the total size is 500 kilobytes. I wanted to compress the JSON, but I heard that just enabling compression on the web server (Apache) accomplishes the same thing. I have now done that and enabled compression. But how do I get the size of the DOWNLOAD, rather than the size of the file if I save it?
It's not quite as simple as just enabling compression on the web server. The HTTP request the client sends must include the Accept-Encoding header to indicate which compression scheme or schemes it supports.
The most common is: Accept-Encoding: gzip.
You'd likely need to use a packet sniffer (Fiddler or equivalent) to determine the difference between the compressed payload size on the wire and the decompressed size, because most HTTP libraries I am aware of decompress the payload before passing it back to the calling code.
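As a rough alternative to a sniffer, you can estimate the download size by gzipping the saved file locally, since Apache's mod_deflate uses the same zlib/gzip compression. A sketch, using a hypothetical JSON payload standing in for the saved response:

```python
import gzip
import json

# Hypothetical payload standing in for the saved 500 KB API response.
data = json.dumps([{"id": i, "name": "item-%d" % i} for i in range(1000)])
raw = data.encode("utf-8")

# mod_deflate's compression level is configurable; 6 is zlib's default.
compressed = gzip.compress(raw, compresslevel=6)

print(len(raw), len(compressed))  # the gzip size approximates the wire size
```

This is only an estimate: the actual transfer also includes HTTP headers and any chunked-encoding overhead, but for a 500 KB JSON body those are negligible.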
Is it possible to somehow disable header compression from the server side in HTTP/2, for both the client-to-server and server-to-client directions? E.g. by setting the compression table size to zero, or something? Perhaps by only using the static table?
(This would simplify implementation considerably, which would be more in line with the thinking behind HTTP/1.x: simplicity. The other (huge) benefits of HTTP/2 would remain. In other words, is HPACK mandatory?)
EDIT, rewording for clarity...
Is it possible, from the server side, to ensure that no compression is used? This is in order to avoid implementing a complex part of HTTP/2. I suspect it is not possible (because it would essentially make HTTP/2 slower), but maybe the client is required to obey some setting from the server: either before it starts sending compressed data (really unlikely, because slow), or by restarting uncompressed sending after a new setting arrives (more likely, I feel).
It's possible to avoid compression without setting the table size to zero.
HPACK lets the encoder choose how each header field is represented: indexed against the static table, indexed against the dynamic table, Huffman-encoded, or as a plain string literal.
If you send every header as a string literal without indexing (no compression), you just have to set the representation bits accordingly; the peer must still understand HPACK framing, but your encoder stays trivial.
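As an illustration (not a full HPACK implementation), a "Literal Header Field without Indexing — New Name" representation per RFC 7541 §6.2.2, with the Huffman bit clear, is just a 0x00 byte followed by length-prefixed raw octets. A minimal Python sketch, assuming names and values shorter than 127 bytes so a single length byte suffices:

```python
def encode_literal_no_index(name: bytes, value: bytes) -> bytes:
    """Encode one header field as 'Literal Header Field without
    Indexing -- New Name' (RFC 7541 §6.2.2), with no Huffman coding."""
    assert len(name) < 127 and len(value) < 127  # single-byte lengths only
    out = bytearray([0x00])      # 0000 pattern, 4-bit index 0 => new name
    for s in (name, value):
        out.append(len(s))       # H bit (0x80) clear => raw octets follow
        out.extend(s)
    return bytes(out)

frame = encode_literal_no_index(b"custom-key", b"custom-header")
print(frame.hex())
```

Because this representation never touches the dynamic table, an encoder emitting only these fields never needs to maintain compression state.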
We would like to transfer an XML document to a Web API that can accept text as well as binary data.
What is the best way to transfer it in terms of traffic size?
Is it better to transfer it as clear text or as a stream of binary data?
If you are concerned that the XML data you want to transfer is too large, then you can try using compression, gzip compression being the most popular. Web API has some built-in functionality for this but you could also "roll your own" if you like, for example if you want a different compression algorithm.
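For example, on the client side you could gzip the XML body yourself and declare it with a Content-Encoding header, assuming the server (or a server-side handler such as those linked in this answer) knows to decompress it. A minimal Python sketch with a hypothetical endpoint URL; the request is built but not sent:

```python
import gzip
import urllib.request

# Repetitive XML compresses well; real documents vary.
xml_body = b"<items>" + b"<item>widget</item>" * 200 + b"</items>"
compressed = gzip.compress(xml_body)

# Build (but don't send) the request; the URL is hypothetical.
req = urllib.request.Request(
    "https://example.com/api/orders",
    data=compressed,
    method="POST",
    headers={
        "Content-Type": "application/xml",
        "Content-Encoding": "gzip",  # tells the server the body is gzipped
    },
)

print(len(xml_body), len(compressed))
```

Note that gzip adds fixed overhead, so very small payloads can actually grow; compress only when the body is large enough to benefit.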
Fortunately, there's plenty of code around to help with compressing and decompressing your data stream. Take a look at the following:
MS nuget: https://www.nuget.org/packages/Microsoft.AspNet.WebApi.MessageHandlers.Compression/
http://benfoster.io/blog/aspnet-web-api-compression (blog article with a link to GitHub code)
https://github.com/benfoster/Fabrik.Common/tree/master/src/Fabrik.Common.WebAPI (the GitHub code mentioned above)
(SO) Compression filter for Web API
Finally, you could consider using Expect: 100-Continue. If an API client is about to send a request with a large entity body, like a POST, PUT, or PATCH, it can send "Expect: 100-continue" in its HTTP headers and wait for a "100 Continue" response before sending the entity body. This allows the API server to verify much of the validity of the request before bandwidth is wasted on a body that will only produce an error response (such as a 401 or a 403). Supporting this functionality is not very common, but it can improve API responsiveness and reduce bandwidth in some scenarios. (RFC 2616 §8.2.3)
While I appreciate an answer full of links can be problematic if those links go out-of-date or get deleted, explaining Web API compression here is just too large a subject. I hope my answer steers you in a useful direction.