If the communication protocol is HTTPS, is adding a checksum of any kind to the payload beneficial? To be exact:
1. Is it good to have a checksum on the content body to ensure integrity?
2. Is it good to have a checksum of files/multipart content to ensure integrity?
3. Both 1 and 2 are good to have.
4. Neither is necessary.
I am thinking in terms of web services only.
I am creating an application that allows users to submit JSON or Base64 image data via socket.io.
The goal I am trying to achieve is:
if JSON is submitted, the message can have a maximum size of 1MB
if a Base64 image is submitted, the message can have a maximum size of 5MB
From the socket.io docs I can see that:
you can specify a maxHttpBufferSize option value that allows you to limit the maximum message size
namespaces allow you to split logic over a single connection
However, I can't figure out the correct way to get the functionality to work the way I have described above.
Would I need to:
set up 2 separate io instances on the server, one for JSON data and the other for Base64 images (therefore allowing me to set a separate maxHttpBufferSize value for each), and then have the client use the correct instance depending on what it wants to submit (if so, what is the correct way of doing this?)
set up 1 instance with a maxHttpBufferSize of 5MB, and then add in my own custom logic to determine message sizes and prevent further actions if the data is JSON and over 1MB in size
set this up in some totally different way that I haven't thought of
Many thanks
From what I can see in the API, maxHttpBufferSize is a parameter for the underlying Engine.IO server (of which there is one instance per Socket.IO Server instance). Obviously you're free to set up two servers, but I doubt it makes sense to separate the system into two entirely different applications.
Talk of using Namespaces to separate logic is more about handling different messages at different endpoints (for example, you would register a removeUserFromChat message handler for a user connecting via the /admin namespace, but you wouldn't want to register it for a user connecting via the /user namespace).
In the most recent socket server I set up, I defined my own protocol where part of the response would contain an HTTP status code, as well as a description that could be displayed to the user. For example, I would return 200 on success. If I were uploading a file via a REST HTTP interface, I would expect a 400 (BAD REQUEST) response if my request couldn't be processed, and I believe the same makes sense for your use case. Alternatively, you could define your own custom 4XX error code for when the file is too large, and handle it in your UI purely based on the code returned. Obviously you don't need to follow the HTTP protocol, and the design decisions are ultimately up to you, but in my opinion it makes sense to return some kind of error response from your message handler.
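To illustrate, here is a minimal TypeScript sketch of the second option from the question (one server capped at 5MB, plus an app-level 1MB check for JSON) combined with the status-code idea above. It assumes maxHttpBufferSize behaves as the docs quoted in the question describe (a per-message limit); the event names ("json", "image"), the port, and the shape of the acknowledgement object are my own assumptions, not anything from the thread:

```typescript
import { Server } from "socket.io";

// One server, hard-capped at 5 MB: Engine.IO discards anything larger
// and closes the offending connection.
const io = new Server(3000, { maxHttpBufferSize: 5e6 });

// App-level cap for JSON payloads (1 MB).
const JSON_LIMIT = 1e6;

type Ack = (res: { status: number; message: string }) => void;

io.on("connection", (socket) => {
  socket.on("json", (payload: unknown, ack?: Ack) => {
    // Measure the serialized size of the submitted JSON.
    if (Buffer.byteLength(JSON.stringify(payload) ?? "") > JSON_LIMIT) {
      ack?.({ status: 413, message: "JSON payload exceeds 1MB" });
      return;
    }
    // ...process the JSON...
    ack?.({ status: 200, message: "OK" });
  });

  socket.on("image", (data: string, ack?: Ack) => {
    // Base64 images only need the transport-level cap; anything over
    // 5 MB never reaches this handler.
    // ...process the image...
    ack?.({ status: 200, message: "OK" });
  });
});
```

This leaves namespaces free for what they are good at (separating handlers) rather than bending them into per-size limits.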
I suspect that maxHttpBufferSize has a different use at a lower level than your use case. When content is sent over the network, it is split into packets of n bytes each: the smaller n is, the more overhead due to network headers; the larger n is, the higher the latency, because of the waiting involved in accumulating n bytes before sending. The documentation is not clear about maxHttpBufferSize, but it could be the packet-size (n) configuration rather than a limit on the maximum data over a connection.
It seems the HTTP request header Content-Length might serve your purpose: it gives the actual object size, and based on that you can make a decision.
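A sketch of that idea as a plain Node/TypeScript upload handler (the endpoint and 1MB limit are assumptions). Note that Content-Length is client-supplied, so the limit should also be enforced while the body is actually read:

```typescript
import http from "node:http";

const MAX_BYTES = 1e6; // 1 MB limit for this endpoint

http.createServer((req, res) => {
  // Cheap first check: reject up front when the declared size is
  // missing or too large.
  const declared = Number(req.headers["content-length"]);
  if (!Number.isFinite(declared) || declared > MAX_BYTES) {
    res.writeHead(413, { Connection: "close" });
    res.end("Payload too large");
    return;
  }

  // Keep counting while streaming, in case the header lied.
  let received = 0;
  req.on("data", (chunk: Buffer) => {
    received += chunk.length;
    if (received > MAX_BYTES) req.destroy(); // client sees a reset
  });
  req.on("end", () => {
    res.writeHead(200);
    res.end("OK");
  });
}).listen(3000);
```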
I need to be able to validate TOS/DSCP marks on response data from a set of HTTP servers. Given a list of target URLs to test, is there a way in Go to generate the HTTP request and then examine the response's TCP packet details in order to obtain the TOS value?
My assumption at this point is that it may require creating a socket and then dynamically generating a TCP packet that contains the HTTP request payload. I've been searching around to see if there are any libraries that would aid in this task, but haven't found anything specific yet.
Note: a simple TCP connection will not provide enough data - the target servers in question will alter TOS/DSCP marks dynamically based on the HTTP server name (so essentially, a single physical server will respond with different TOS marks depending on the vHost requested), so it is important to be able to verify the TOS on actual HTTP response packets, and not something simple like a ping. The TOS values in the TCP 3-way handshake cannot be trusted either - it must be a packet containing the HTTP data.
I did end up solving this problem using gopacket/pcap and net/http.
In a nutshell, I wrote a function that creates a channel and then starts a goroutine that does the actual packet capture and parsing. The goroutine passes the captured TOS value back over the channel; the original function makes the HTTP request and then reads from the channel to get the TOS result. Still a bit of a work in progress, but so far this solution seems to be working fairly well.
I'm making a simple HTTP service with a Spring Boot RestController, and what I found is that when I request a JSON object via GET, I don't get Content-Length in the headers and Transfer-Encoding becomes chunked.
With a simple ResponseEntity<String>, all headers are set as expected.
What kind of problem may lead to this behavior?
This is not a problem. Transfer-Encoding: chunked with no Content-Length means that the response was compressed. If compression is enabled in Spring Boot, it will compress responses larger than a certain threshold (2048 bytes by default). I think your ResponseEntity<String> is simply smaller than the size required for compression.
You can read more about compression settings in the documentation.
If you want consistency, you can either disable compression or set server.compression.min-response-size to a very small value, but I would suggest keeping it as is.
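For reference, a minimal application.properties sketch of the settings mentioned above (the property names are the standard Spring Boot compression settings; the values shown are illustrative):

```properties
server.compression.enabled=true
# Responses smaller than this (in bytes) are sent uncompressed,
# with a Content-Length header. 2048 is the default.
server.compression.min-response-size=2048

# Or, to get Content-Length on every response, turn compression off:
# server.compression.enabled=false
```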
I am working in JMeter, trying to send a SOAP request to the server, and it shows the error message below.
Error message: Cannot process the message because the content type 'application/soap+msbin1' was not the expected type 'application/xml; charset=utf-8'.
We need help encoding XML to the 'application/soap+msbin1' format.
A bit late to the party, but I encountered a similar issue - I had a template for a SOAP request which uses embedded-binary XML (xop:Include cid="...") and had to scratch my head to figure out how to do that with the stock HTTP Request sampler.
The answer: you can't - not in a simple way. To solve the issue, I ended up customizing JMeter (I also looked at HTTPRawRequest, but it doesn't seem to support HTTPS and I would have had to rewrite a lot of the test script to use it). Since HTTP Request does 99% of the job, the quickest way to support binary data was to change the source code to handle it.
There are two main issues. First, the Function interface in JMeter is designed around returning String, not byte[], so __FileToString() (which I used to read an external binary file) mangles the file's content by encoding it as a String. Secondly, the HTTP Request sampler and HTTPHC4Impl itself (excluding the "upload file" part) encode the parts of the HTTP request before sending them over the wire.
Changing that meant changes in Function, AbstractFunction and CompoundVariable, plus a new function class, FileToStringBinary, which encodes the binary data in a way that can be decoded later (by the changes made to HTTPHC4Impl).
If I find the time I'll find somewhere to post the idea and the source (I can't submit it to JMeter because my update to HTTPHC4Impl is limited to handling the specific requests I need to test, where the embedded binary is in a multipart/related part, and I have no time or inclination to handle the general case), but if you still need help making it work, drop a line.
I was reading Google's performance document about HTTP caching at https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching. This document says that we should use ETags when possible. I am using ASP.NET Web API 2.2, and I am now trying to implement ETags in all of my public APIs. I am thinking of implementing the ETag using MD5; that is, I will hash the JSON response on each request using MD5. Is there any performance hit from using MD5.calculateHash on each request? My JSON responses are not too big (within the range of 1 to 20KB).
Yes, there will be a performance hit: calculating a hash takes time. However, you may find that the cost of calculating the hash is insignificant compared to the performance gained by not transferring unchanged bytes over the wire to the client.
However, there is no guarantee that you will get a performance improvement with ETags; it depends on many things. Are you going to regenerate the ETag on the server on every request and compare it with the one in the incoming request? Or are you going to keep a cache of ETag values and invalidate entries when the resource changes?
If you regenerate the ETag on every request, it is possible that the time spent pulling data from the database and formatting the representation will be significantly higher than the time it takes to send a few bytes over the wire, especially if the entire representation fits in a single network packet.
The key here is to ask whether you really need the performance gain of ETags and whether it is worth the implementation cost. Setting cache-control headers to enable client-side private caching may give you all the benefit you need without having to implement ETags.
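The question is about ASP.NET Web API, but the mechanics are the same on any stack. Here is a minimal Node/TypeScript sketch of the "regenerate on every request" variant, purely for illustration (the payload and port are placeholders):

```typescript
import { createHash } from "node:crypto";
import http from "node:http";

http.createServer((req, res) => {
  // Stand-in for the real work: pull the data and build the JSON
  // representation. This cost is paid on every request, even when
  // we end up answering with a 304.
  const body = JSON.stringify({ hello: "world" });

  // Strong ETag derived from an MD5 hash of the representation.
  const etag = `"${createHash("md5").update(body).digest("hex")}"`;

  if (req.headers["if-none-match"] === etag) {
    res.writeHead(304, { ETag: etag }); // client's cache is current: no body
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "application/json", ETag: etag });
  res.end(body);
}).listen(3000);
```

Note that the representation is still built and hashed on every request; only the transfer of the body is saved, which is why the cache-control approach above can be the better deal.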
I have a number of posts that go into this subject in more detail:
http://www.bizcoder.com/using-etags-and-last-modified-headers-to-improve-performance-with-http-conditional-requests
http://bizcoder.com/implementing-conditional-request-handling-for-your-api