How do I get socket.io to automatically compress my message? - websocket

I am using socket.io on my client and server.
As an example, I am sending a 5MB PDF from the server to the client:
const pdf = require('fs').readFileSync('5mb.pdf');
socket.compress(true).send(pdf);
However, I am not sure if this is actually compressing...
I say this because I tried the same thing as above with compression disabled, and the Length shown in Chrome's Dev Tools is the same in both cases.
How do I verify that the compression is actually working? And how do I find out how effective the compression is?

Turn off WebSocket Secure (wss) if it's on.
Use Wireshark to view the raw data.
I had the same question and couldn't find an answer, so I did my own analysis:
Chrome DevTools shows that I send 3 MB twice, thus 6 MB in total.
In Wireshark, I can see the beginning and the end in raw text, but the middle is compressed and looks like base64. Wireshark also only sees around 1 MB of data transferred in total.
In Chrome DevTools, there is nothing I can see to inform me that the data is compressed.
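If you want that comparison to be meaningful, it is also worth making sure permessage-deflate is actually enabled on the server side. A minimal sketch, assuming Node and socket.io (the perMessageDeflate option is forwarded to the underlying engine and its default differs between socket.io versions, so verify against your version):

const fs = require('fs');
const io = require('socket.io')(3000, {
  // permessage-deflate settings; payloads below the threshold (in bytes)
  // are sent uncompressed
  perMessageDeflate: { threshold: 1024 }
});

io.on('connection', (socket) => {
  const pdf = fs.readFileSync('5mb.pdf');
  // compress(true) only flags the packet as compressible; the actual deflate
  // happens at the transport layer when perMessageDeflate is enabled
  socket.compress(true).send(pdf);
});

With that in place, the Wireshark byte counts with compression on versus off should differ for compressible payloads.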

PDF is already a compressed format, so most of the time, trying to compress a PDF file leads to nearly no reduction in size.
You should probably do the same test with a plain ASCII file.
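A quick local check of that point, using zlib as a rough proxy for what permessage-deflate would achieve on the wire (the file name is taken from the question; the ASCII buffer is just an illustrative stand-in):

const fs = require('fs');
const zlib = require('zlib');

const pdf = fs.readFileSync('5mb.pdf');           // already a compressed format
const text = Buffer.alloc(5 * 1024 * 1024, 'a');  // highly compressible ASCII

console.log('pdf  raw:', pdf.length, 'deflated:', zlib.deflateSync(pdf).length);
console.log('text raw:', text.length, 'deflated:', zlib.deflateSync(text).length);

The PDF typically shrinks only marginally, while the text buffer collapses to a few kilobytes.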

Related

File transfer takes too much time

I have an empty API endpoint in Laravel running behind nginx and Apache. The problem is that the API takes a long time when I call it with files, but responds quickly when I call it with blank data.
Case 1: I called the API with a blank request; the response time was only 228 ms.
Case 2: I called the API with a 5 MB file; the file transfer took too long, so the response time was 15.58 s.
So how can we reduce the transfer time on the Apache or nginx server? Is there any server configuration or anything else that I have missed?
When I searched on Google, the advice was to keep all versions up to date and use PHP-FPM, but when I configured PHP-FPM and the HTTP/2 protocol on my server, I noticed it took even more time than above. All server software is already up to date.
This has more to do with the fact that one request has nothing to process, so the response is prompt, whereas the other request requires actual processing, so the response takes as long as the server needs to process the content of your request.
Depending on the size of the file and your server configuration, you might hit a limit which will result in a timeout response.
A solution to the issue you're encountering is to chunk your file upload. There are a few packages available so that you don't have to write that functionality yourself; an example of such a package is the Pionl Laravel Chunk Upload.
An alternative solution would be to offload the file processing to a Queue.
Update
When I searched on Google about chunking, it seems it's not the best solution for small files like 5-10 MB; it's best suited for big files like 50-100 MB. So is there any server-side chunking configuration, or anything else I can do, or can I use this library to chunk small files?
According to the library documentation, this is a web library. What should I use if my API is called from Android and iOS apps?
True, chunking might not be the best solution for smaller files, but it is worth knowing about. My recommendation would be to use some client-side logic to determine whether sending the file in chunks is required. On the server, use a Queue to process the file upload in the background; this lets the request finish without waiting on the processing, so a response can be sent back to the client (iOS/Android app) in a timely manner.
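A rough client-side sketch of that idea in browser JavaScript; the /upload and /upload-chunk endpoints, the 10 MB cut-off and the 2 MB chunk size are assumptions, not part of the original answer:

const CHUNK_THRESHOLD = 10 * 1024 * 1024; // below this, send in one request
const CHUNK_SIZE = 2 * 1024 * 1024;

async function uploadFile(file) {
  if (file.size <= CHUNK_THRESHOLD) {
    const form = new FormData();
    form.append('file', file);
    return fetch('/upload', { method: 'POST', body: form });
  }
  // Larger files: send sequential chunks the server can reassemble
  // (e.g. with a chunk-upload package) and then process on a queue.
  const totalChunks = Math.ceil(file.size / CHUNK_SIZE);
  for (let i = 0; i < totalChunks; i++) {
    const chunk = file.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    const form = new FormData();
    form.append('file', chunk, file.name);
    form.append('chunkIndex', String(i));
    form.append('totalChunks', String(totalChunks));
    await fetch('/upload-chunk', { method: 'POST', body: form });
  }
}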

Load testing of progressive download (video) or larger file downloads

I am looking at load testing progressive-download video files with a 100-user load. The tools I am considering are JMeter, LoadRunner and NeoLoad. The script required for creating the load is very simple; it consists of a couple of requests, makes the connection with the server and starts downloading the file. I understand that progressive download is a pretty old technology, but it is still used on many websites. My question is about strategy.
Do we need to download the complete file (i.e. 1.3 GB in my case)?
Even when we tried saving the response as a file, resources such as network and disk I/O were at their maximum. Does this strategy suit here?
Is there another strategy where we can engage the server for the test duration and check for underlying connection issues and transmission speed?
Depending on your use case, there is a seeking feature, so theoretically you should be able to specify a start offset so you will not have to get the whole file. You can also consider using an HTTP Header Manager to send a Range header.
If your target is to verify that the file has been downloaded fully and is not broken, you can tick the "Save Response as MD5 Hash" box on the "Advanced" tab of the HTTP Request sampler; this way you will save at least 130 GB of disk space. The MD5 checksum can then be verified using, for example, an MD5Hex Assertion.
The main idea of load testing is to simulate real application usage with 100% accuracy. Without knowing the requirements of your product it is impossible to come up with specific suggestions; however, JMeter can be configured to behave pretty much like a real browser does, so it is a viable option.
See the Load Testing Video Streaming with JMeter: Learn How article for more information if needed.
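The Range idea is plain HTTP, so it can be sanity-checked outside JMeter first; inside JMeter the same header goes into an HTTP Header Manager. A small Node sketch (Node 18+ fetch assumed, and the URL is a placeholder):

async function fetchFirstMegabyte() {
  const res = await fetch('https://example.com/video.mp4', {
    headers: { Range: 'bytes=0-1048575' } // request only the first 1 MB
  });
  // A 206 Partial Content status means the server honoured the Range header,
  // so each virtual user does not have to pull the full 1.3 GB.
  console.log(res.status, res.headers.get('content-range'));
}

fetchFirstMegabyte();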

What can be the reason for slow content downloading from a web server?

I'm trying to increase the performance of a web page. I'm using ReactJS + webpack, which compiles my JSX files into one file, search.bundle.js. The server takes 2-3 seconds to return this file, which is ~200 KB. Is the file size the only reason?
On the local server it works pretty well, but on the remote web server it is really slow.
There is a Google Map and a listing of items on the page, which I fetch using an AJAX request. It is a recursive request (it repeats until enough data is received, or it times out) called in componentDidMount, but as I understand it, it can only start requesting items after the script has loaded on the page.
So is there any way to make this script download faster, or should I just try to reduce its size?
Some data from the Headers tab, for both the local and the remote server, was attached as screenshots.
The answer to this question is that the script has a source map available. With Chrome's Developer Tools panel open, the browser makes transparent requests for any available source map files (which it does not show you).
These files are often absolutely massive (mine are 3 MB+) and take considerable time to pull on every refresh. Nginx specifically also handles them very poorly, at least in my setup. I think setting the maximum sendfile chunk size to a lower value helps.
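One way to avoid those hidden source-map downloads is to stop emitting the sourceMappingURL reference in production builds. A minimal webpack sketch (standard devtool options; adapt to your own build setup):

// webpack.config.js
module.exports = {
  mode: 'production',
  // Either skip source maps entirely...
  devtool: false,
  // ...or keep them for error reporting but without the reference comment,
  // so DevTools will not request them automatically:
  // devtool: 'hidden-source-map',
};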

Multiple images sent to client in one session

When you load an image in a browser, a so-called handshake takes place between the client and the server that sends the picture.
This handshake then happens for every picture the client downloads, so if you have many images, downloading them can become slow, largely because the client and the server keep repeating this handshake procedure. This reduces connection speed, especially on a device such as an iPad. There are some ways around this, such as sending a single large image and then using clips within that image as if they were separate images, but that clutters the code and complicates things.
Is there any way to send multiple images to the client via a single handshake, thereby avoiding both the clipping procedure and the client-server communication overhead?
You can base64-encode the images and send them via JavaScript. Expect roughly a 4/3 increase in size.
An example is shown here:
http://www.sweeting.org/mark/blog/2005/07/12/base64-encoded-images-embedded-in-html
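A small Node sketch of the same idea: inline several images as data URIs so they travel inside the single HTML response instead of triggering separate requests (the file names are placeholders):

const fs = require('fs');

function toDataUri(path, mime = 'image/png') {
  const base64 = fs.readFileSync(path).toString('base64'); // ~4/3 of the original size
  return `data:${mime};base64,${base64}`;
}

const html = ['img1.png', 'img2.png', 'img3.png']
  .map((p) => `<img src="${toDataUri(p)}">`)
  .join('\n');
// `html` can now be embedded in the page that is sent to the client.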

Rails: How to determine the size of the HTTP response the server delivers to the client?

I am running a Rails 3.2.2 app on Ruby 1.9.3, and on my production server I run Phusion Passenger on Apache.
I deliver a relatively large number of data objects in JSON format, which contain redundant data from a related model. I want to know how many bytes the server has to deliver, how the redundant content can be gzipped by the server, and how the redundant data influences the size of the HTTP response that has to be shipped.
Thanks for your help!
If you just want to know in general how much data is being sent, use curl or wget to make the request and save to a file -- the size of the file is (approximately) the size of the response, not including the headers, which are typically small. gzip the file to see how much is actually sent over the wire.
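The same measurement can be done programmatically; a small Node sketch (Node 18+ fetch and the built-in zlib assumed, and the URL is a placeholder):

const zlib = require('zlib');

async function measure(url) {
  const body = Buffer.from(await (await fetch(url)).arrayBuffer());
  const gzipped = zlib.gzipSync(body);
  console.log(`raw: ${body.length} bytes, gzipped: ${gzipped.length} bytes`);
}

measure('https://example.com/api/objects.json');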
Even cooler, use the developer tools of your favorite browser (which is Chrome, right?). Select the Network tab, then click the GET (or PUT or POST) request that is executed and check things out. One tab will contain the headers of the response, which will likely include a Content-Length header. Assuming your server is set up to gzip, you'll be able to see how much compression you're getting (compare the uncompressed size to the Content-Length). The timings are all there too, so you can see how much time it takes to get a connection, for the server to do the work, for the server to send back the data, and so on. Brilliantly cool tools for understanding what's really happening under the covers.
But echoing the comment of Alex (capital A) -- if you're sending a ton of data in an AJAX request, you should be thinking of architecture and design in most cases. Not all, but most.
