Is it possible to somehow disable header compression from the server side in HTTP/2, for both client-to-server and server-to-client communication? For example, by setting the compression table size to zero, or perhaps by using only the static table?
(This would simplify the implementation considerably, which would be more in line with the thinking behind HTTP/1: simplicity. The other, huge benefits of HTTP/2 would remain. In other words, is HPACK mandatory?)
EDIT, rewording for clarity...
Is it possible, from the server, to ensure that no compression is used, in order to avoid implementing a complex part of HTTP/2? I suspect that it is not possible (because it would essentially make HTTP/2 slower). But maybe the client is required to obey some setting from the server, either before it starts sending compressed data (really unlikely, because slow), or by switching to uncompressed sending after receiving a new setting (more likely, I feel).
It's possible to disable the compression without setting the table size to zero.
You can choose how each header field is represented: use the static table only, use the dynamic table, use Huffman encoding, or use plain string literals.
If you send a header as a string literal (no compression), you just have to set the representation flag accordingly.
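For example, advertising SETTINGS_HEADER_TABLE_SIZE = 0 in your SETTINGS frame prevents the peer from adding anything to the dynamic table for the headers it sends to you, although your decoder still has to understand static-table references and Huffman-coded literals, since the peer is allowed to use them. On the encoding side, the simplest legal output is to emit every field as a "Literal Header Field without Indexing" with plain (non-Huffman) string literals. A minimal sketch of that representation (Python, not tied to any particular HTTP/2 library; see RFC 7541, sections 5.1 and 6.2.2):

```
def hpack_int(value, prefix_bits, flags=0):
    # HPACK integer representation (RFC 7541, section 5.1).
    limit = (1 << prefix_bits) - 1
    if value < limit:
        return bytes([flags | value])
    out = bytearray([flags | limit])
    value -= limit
    while value >= 0x80:
        out.append((value & 0x7F) | 0x80)
        value >>= 7
    out.append(value)
    return bytes(out)

def literal_header(name: bytes, value: bytes) -> bytes:
    # "Literal Header Field without Indexing -- New Name" (RFC 7541, 6.2.2):
    # first byte 0x00, then name and value as raw string literals (H bit = 0,
    # i.e. no Huffman coding), so nothing ever enters the dynamic table.
    return (b"\x00"
            + hpack_int(len(name), 7) + name
            + hpack_int(len(value), 7) + value)

# A header block for two uncompressed fields:
block = literal_header(b"content-type", b"text/plain") \
      + literal_header(b"x-demo", b"hello")
print(block.hex())
```

So HPACK framing itself is mandatory, but with this representation nothing is ever compressed and your encoder never has to keep any dynamic table state.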
My question is in the title; the rest of this provides context to help you understand my confusion. Everything is sent over HTTPS.
My understanding of base 64 encoding is that it is a way of representing binary data as text, such that the text is safe to transmit across networks or the internet because it avoids anything that might be interpreted as a control code by the various possible protocols that might be involved at some point.
Given this understanding, I am confused why everything sent over the internet is not base 64 encoded. When is it safe not to base 64 encode something before sending it? I understand that not everything understands or expects to receive things in base 64, but my question is why doesn't everything expect and work with this if it is the only way to send data without the possibility it could be interpreted as control codes?
I am designing an Android app and server API such that the app can use the API to send data to the server. There are some potentially large SQLite database files the client will be sending to the server (I know this sounds strange, yes it needs to send the entire database files). They are being gzipped prior to uploading. I know there is also a header that can be used to indicate this: Content-Encoding: gzip. Would it be safe to compress the data and send it with this header without base 64 encoding it? If not, why does such a header exist if it is not safe to use? I mean, if you base 64 encode it first and then compress it, you undo the point of base 64 encoding and it is not at that point base 64 encoded. If you compress it first and then base 64 encode it, that header would no longer be valid as it is not in the compressed format at that point. We actually don't want to use the header because we want to save the files in a compressed state, and using the header will cause the server to decompress it prior to our API code running. I'm only asking this to further clarify why I am confused about whether it is safe to send gzip compressed data without base 64 encoding it.
My best guess is that it depends on if what you are sending is binary data or not. If you are sending binary data, it should be base 64 encoded as the final step before uploading it. But if you are sending text data, you may not need to do this. However, it still seems to me that this might depend on the character encoding used. Perhaps some character encodings can result in sending data that could be interpreted as a control code? If this is true, which character encodings are safe to send without base 64 encoding them as the final step prior to sending? If I am correct about this, it implies you should only use that gzip header if you are sending compressed text that has not been base 64 encoded. Does compressing it create the possibility of something that could be interpreted as a control code?
I realize this was rather long, so I will repeat my primary question (the title) here: Is either Gzip compressed binary data or uncompressed text safe to transmit, or should it be base 64 encoded as the final step before sending it? Okay, I lied, there is one more question involved in this. Would gzip-compressed text always be safe to send without base 64 encoding it at the end, no matter which character encoding it had prior to compression?
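To make the ordering I'm asking about concrete, here's a quick sketch (Python, purely for illustration) of the point that gzip output is arbitrary binary and that base 64 encoding it afterwards only inflates it by roughly a third:

```
import base64
import gzip

payload = ("some text destined for the server " * 200).encode("utf-8")

compressed = gzip.compress(payload)        # arbitrary binary bytes
wrapped = base64.b64encode(compressed)     # same data, ASCII-only, ~33% larger

print(len(payload), len(compressed), len(wrapped))
```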
My understanding of base 64 encoding is that it is a way of representing binary data as text,
Specifically, as text consisting of characters drawn from a 64-character set, plus a couple of additional characters serving special purposes.
such that the text is safe to transmit across networks or the internet because it avoids anything that might be interpreted as a control code by the various possible protocols that might be involved at some point.
That's a bit of an overstatement. For two endpoints to communicate with each other, they need to agree on one protocol. If another protocol becomes involved along the way, then it is the responsibility of the endpoints for that transmission to handle any needed encoding considerations for it.
What bytes and byte combinations can successfully be conveyed is a matter of the protocol in use, and there are plenty that handle binary data just fine.
At one time there was also an issue that some networks were not 8-bit clean, so that bytes with numeric values greater than 127 could not be conveyed across those networks, but that is not a practical concern today.
Given this understanding, I am confused why everything sent over the internet is not base 64 encoded.
Given that the understanding you expressed is seriously flawed, it is not surprising that you are confused.
When is it safe not to base 64 encode something before sending it?
It is not only safe but essential to avoid base 64 encoding when the recipient of the transmission expects something different. The two or more parties to a given transmission must agree about the protocol to be used. That establishes the acceptable parameters of the communication. Although Base 64 is an available option for part or all of a message, it is by no means the only one, nor is it necessarily the best one for binary data, much less for data that are textual to begin with.
I understand that not everything understands or expects to receive things in base 64, but my question is why doesn't everything expect and work with this if it is the only way to send data without the possibility it could be interpreted as control codes?
Because it is not by any means the only way to avoid data being misinterpreted.
They are being gzipped prior to uploading. I know there is also a header that can be used to indicate this: Content-Encoding: gzip. Would it be safe to compress the data and send it with this header without base 64 encoding it?
It would be expected to transfer such data without base-64 encoding it. HTTP(S) handles binary data just fine. The Content-Encoding header tells the recipient how to interpret the message body, and if it specifies a binary content type (such as gzip) then binary data conforming to that content type are what the recipient will expect.
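As an illustration, here is a minimal sketch of such an upload (Python standard library; the host api.example.com, the path and the file name are hypothetical). Since you said you want the file stored compressed, the sketch declares the compression as the content type rather than as a Content-Encoding, so nothing along the way is invited to decompress it:

```
import gzip
import http.client

# Hypothetical file name, host and path, purely for illustration.
with open("backup.sqlite", "rb") as f:
    body = gzip.compress(f.read())

conn = http.client.HTTPSConnection("api.example.com")
conn.request(
    "POST",
    "/v1/databases",
    body=body,                              # raw gzip bytes, no base-64 step
    headers={
        "Content-Type": "application/gzip", # binary content type, stored as-is
        "Content-Length": str(len(body)),
    },
)
print(conn.getresponse().status)
```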
My best guess is that it depends on if what you are sending is binary data or not.
No. These days, for all practical intents and purposes, it depends only on what application-layer protocol you are using for the transmission. If it specifies that some or all of the message is to be base-64 encoded (according to a particular base-64 scheme, as there are more than one) then that's what the sender must do and how the receiver will interpret the message. If the protocol does not specify that, then the sender must not perform base-64 encoding. Some protocols afford the sender the option to make this choice, but those also provide a way for the sender to indicate inside the transmission what choice has been made.
Is either Gzip compressed binary data or uncompressed text safe to transmit, or should it be base 64 encoded as the final step before sending it?
Neither is inherently unsafe to transmit on today's networks. Whether data are base-64 encoded for transmission is a question of agreement between sender and receiver.
Okay, I lied, there is one more question involved in this. Would gzip-compressed text always be safe to send without base 64 encoding it at the end, no matter which character encoding it had prior to compression?
The character encoding of the uncompressed text is not a factor in whether a gzipped version can be safely and successfully conveyed. But it probably matters for the receiver or anyone to whom they forward that data to understand the uncompressed text correctly. If you intend to accommodate multiple character encodings then you will want to provide a way to indicate which applies to each text.
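A tiny sketch of that last point (Python, sample text made up): the character encoding never affects whether the gzipped bytes can be conveyed, only how the receiver must decode them after decompressing, which is why some explicit charset indication is worth carrying alongside the data.

```
import gzip

text = "café données"                      # sample text containing non-ASCII

for charset in ("utf-8", "iso-8859-1"):
    compressed = gzip.compress(text.encode(charset))
    # The gzipped bytes travel fine either way; the receiver just needs to be
    # told (e.g. via a charset parameter in the metadata) which one to use
    # after decompressing.
    assert gzip.decompress(compressed).decode(charset) == text
    print(charset, "->", len(compressed), "bytes on the wire")
```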
I am implementing a WebSocket server in C and was wondering what the purpose of the text/binary frame indicators (opcodes 1 and 2) is. Why are they there? In the end, in both cases the payload contains bits, and when there is a protocol running over the WebSocket, I know what to expect in the data. Is it because, when it's a text message, I can be sure that the payload only contains valid UTF-8 data?
I will start my answer by pointing out that WebSockets are often implemented with a JavaScript client in mind (i.e., a browser).
When you're using C, the different opcodes might be used in different ways, but when using JavaScript, this difference controls the type of the data delivered in the message event (Blob vs. String).
As you point out in the question, a text payload is always a valid UTF-8 stream of bytes, whereas a binary blob isn't.
This affects some data transport schemes (such as JSON parsing, which requires a valid UTF-8 stream).
Obviously, in C, these opcodes could be used in different ways, but it would be better to handle them in the same manner as a potential JavaScript client would.
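To make that concrete, here is a minimal sketch (in Python rather than C, purely for brevity) of a server picking the opcode from the payload type; the framing follows RFC 6455, section 5.2:

```
import struct

OP_TEXT, OP_BINARY = 0x1, 0x2

def server_frame(payload) -> bytes:
    # Build a minimal unmasked, unfragmented server-to-client frame.
    # str payloads go out as text frames (opcode 1, UTF-8), bytes as binary (opcode 2).
    if isinstance(payload, str):
        opcode, data = OP_TEXT, payload.encode("utf-8")
    else:
        opcode, data = OP_BINARY, bytes(payload)
    head = bytearray([0x80 | opcode])          # FIN = 1, RSV = 0, opcode
    n = len(data)
    if n < 126:
        head.append(n)                         # MASK = 0 for server frames
    elif n < (1 << 16):
        head.append(126)
        head += struct.pack("!H", n)
    else:
        head.append(127)
        head += struct.pack("!Q", n)
    return bytes(head) + data

# A browser will hand the first to onmessage as a String, the second as a Blob.
print(server_frame("hello").hex())             # 81 05 68 65 6c 6c 6f
print(server_frame(b"\x00\x01\x02").hex())     # 82 03 00 01 02
```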
P.S.
There are a number of WebSocket C libraries and frameworks out there (I'm the author of facil.io).
Unless this is a study project, I would consider using one of the established frameworks / libraries.
So in one of my projects I have to create an HTTP cache to handle multiple API calls to the server. I read about the ETag header, which can be used with a conditional GET to minimize server load and enable caching. However, I have a problem with generating the ETag. I can use the LAST_UPDATED_TIMESTAMP of the resource as the ETag, or hash it using some sort of hashing algorithm like MD5. But what would be the best way to do this? Are there any cons to using a raw timestamp as the ETag?
Any supportive answer is highly appreciated. Thanks in advance. Cheers!
If your timestamp has enough precision that you can guarantee it will change any time the resource changes, then you can use an encoding of the timestamp (the header value needs to be ASCII).
But bear in mind that ETags may not save you much. They are just a cache revalidation mechanism, so you will still get as many requests from clients; some will just be conditional, and you may then be able to avoid sending the payload back if the ETag didn't change, but you will still incur some work figuring that out (maybe a lot less work, so it could be worth it).
In fact, several versions of IIS used the file timestamp to generate an ETag. We tripped over that when building WinGate's cache module, when a whole bunch of files with the same timestamp ended up with the same ETag, and we learned that an ETag is only valid in the context of the request URI.
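For concreteness, here is a small sketch of the timestamp approach with that caveat folded in (Python; all names are made up): the URI is hashed together with the timestamp so that resources sharing a modification time still get distinct tags, and the conditional GET path is where the saving comes from.

```
import hashlib
from datetime import datetime, timezone

def make_etag(request_uri: str, last_updated: datetime) -> str:
    # Mix the URI into the hash so two resources that happen to share a
    # modification timestamp still get distinct tags.
    raw = f"{request_uri}|{last_updated.isoformat()}"
    return '"' + hashlib.md5(raw.encode("utf-8")).hexdigest() + '"'

def handle_get(if_none_match, request_uri, last_updated, load_body):
    etag = make_etag(request_uri, last_updated)
    if if_none_match == etag:
        return 304, {"ETag": etag}, b""        # revalidated: skip the payload
    return 200, {"ETag": etag}, load_body()    # full response carrying the tag

# Example: the second request presents the tag it got from the first one.
ts = datetime(2024, 1, 1, tzinfo=timezone.utc)
status, headers, _ = handle_get(None, "/api/items/42", ts, lambda: b"{...}")
status2, _, _ = handle_get(headers["ETag"], "/api/items/42", ts, lambda: b"{...}")
print(status, status2)   # 200, then 304
```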
I was writing a spec on how to organize the query parameters that are sent in an HTTP request, and I came up with the following:
All parameters are prefixed with the entity to which they belong, for example "a.b", which is read "b of entity a". That way each parameter is clearly mapped to the corresponding entity. But what if two different entities share a query parameter? To avoid repetition and reduce request size, I came up with the following micro-format: a request-wide entity called shared, where each property of shared represents a property that is shared among entities, e.g.
POST /app/my/resource HTTP/1.1
a.p = v
b.p = v
c.p = v
d.p = v
Here it is clear that property p is shared among a, b, c and d, so this could be sent as
POST /app/my/resource HTTP/1.1
shared.p = a:b:c:d%v
Now the request is smaller and I'm being a bit more DRY; however, this adds an extra burden to the server, as it has to parse the string to extract the values.
Probably in my example the differences are insignificant and I could choose either, but I'd like to know what you think about it and which you would prefer. Maybe the size of the request does not matter, or maybe parsing the string is not such a big deal when it is short, but what happens when we scale up the size of both the request and the string? Which would be better, and what are the tradeoffs?
What you are showing is a compression algorithm. Beware that payloads are often compressed at the protocol layer already (HTTP compression, e.g. gzip). Compression algorithms are advanced enough to compress duplicate string items, so you probably won't win much with a custom scheme.
Generally, try not to optimize prematurely. First show that you have a response-time or payload-size issue, and then optimize. Your compression scheme itself is a good idea, but it makes the payload more complicated than normal key/value pairs (the application/x-www-form-urlencoded Content-Type). For maintenance reasons, head for the simplest design possible.
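If you want to see how much the micro-format actually buys you once the transport compresses the body, a quick check along these lines (Python; the entity names are made up) will give you concrete numbers for your own payload shapes:

```
import gzip

# Hypothetical payload: 200 entities that all share the same value for property p.
entities = [f"e{i}" for i in range(200)]
repeated = "&".join(f"{e}.p=v" for e in entities).encode("ascii")
shared = ("shared.p=" + ":".join(entities) + "%v").encode("ascii")

print("repeated:", len(repeated), "raw ->", len(gzip.compress(repeated)), "gzipped")
print("shared:  ", len(shared), "raw ->", len(gzip.compress(shared)), "gzipped")
```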
Just to throw this out there, I think the answer would depend on what platform your back-end servers are running to process the requests. For example, the last time I checked, the Perl-based mod_perl could parse those strings much faster than something like ASP.NET.
I have a Perl script that converts strings to and from different encodings, like base64, ASCII or hex. Now I am writing an AJAX front end for it, and my question is: if I want to automate the detection of the encoding of the submitted string, is it more efficient to perform a regex search on the string with JavaScript before I send it to the server, or is it faster to leave it to the Perl script to figure out what type of string it is?
To clarify, I am asking which of these two is better:
1. String submitted
2. JavaScript detects the encoding
3. AJAX submits the encoding and the string to the Perl script
4. Perl script returns the decoded string

or

1. String submitted
2. AJAX submits the string to the Perl script
3. Perl script detects the encoding and returns the decoded string
Is there a particular rule of thumb for where this type of processing should be performed, and which do you think is the better (meaning faster) implementation?
You must validate your data on the server. Period. Otherwise you'll be sailing off into uncharted waters as soon as some two-bit wannabe "hacker" passes you a base64 string and a tag claiming that your JavaScript thinks it's hex.
Given this, it's up to you whether you want to also detect encoding on the client side. This has some potential benefits, since it allows you to not send data to the server at all if it's encoded in an invalid fashion or to tell the user what encoding was detected and allow them to correct it if it's an ambiguous case (e.g., hex digits are a subset of the base64 character set, so any hex string could potentially be base64). Just remember that, if an encoding gets passed to the server by the client, the server must still sanity-check the received encoding specifier and be prepared to ignore it (or reject the request completely) if it's inappropriate for the corresponding data.
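Whichever side detects first, the server-side sanity check might look something like this (sketched in Python for brevity; the same regexes port to Perl directly, and the function and names are made up):

```
import re

HEX_RE = re.compile(r"\A(?:[0-9A-Fa-f]{2})+\Z")
B64_RE = re.compile(r"\A[A-Za-z0-9+/]+={0,2}\Z")

def detect_encoding(value, client_hint=None):
    """Return 'hex', 'base64' or 'unknown', never trusting an implausible hint."""
    candidates = []
    if HEX_RE.match(value):
        candidates.append("hex")
    if len(value) % 4 == 0 and B64_RE.match(value):
        candidates.append("base64")
    if client_hint in candidates:
        return client_hint                     # hint is consistent with the data
    return candidates[0] if candidates else "unknown"

print(detect_encoding("deadbeef"))                        # 'hex' (also valid base64)
print(detect_encoding("deadbeef", client_hint="base64"))  # hint accepted: 'base64'
print(detect_encoding("not*valid", client_hint="hex"))    # hint rejected: 'unknown'
```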
This depends on the scale.
If there will be a LOT of client requests to do this, it's definitely "faster" to do it on the client side (e.g. in JS before the AJAX call), since putting it on the server side forces the server to process ALL those requests, which will compete for the server's CPU resources, whereas on the client side you only do one detection per client.
If you only anticipate very few concurrent requests, then doing it in Perl is probably marginally faster, since Perl's regex implementation is likely better/faster than JavaScript's (I don't have any stats to back this up, though) and presumably the server has a better CPU.
But I don't really think the server-side margin would be terribly big, considering the whole processing shouldn't take that long on either side, so I'd advise going with client-side checking, since that (as per the first paragraph) scales better.
If the performance difference between the two really matters to you a lot, you should actually implement both and benchmark under both the average anticipated and the maximum projected client loads.