What can be the reason for slow content download from a web server? - performance

I'm trying to improve the performance of a web page. I'm using ReactJS + webpack, which compiles my jsx files into one file, search.bundle.js. The server takes 2-3 seconds to return this file, and its size is ~200KB. Is file size the only reason?
On my local server it works pretty well, but on the remote web server it is really slow.
There is a Google Map and a listing of items on the page, which I fetch with an AJAX request. It is a recursive request (it repeats until it has enough data, or until a timeout) called in componentDidMount, and as I understand it, it can only start requesting items after the script has loaded on the page.
So is there any way to make this script download faster, or should I just try to reduce its size?
And some data from the Headers tab, on local and on remote (screenshots omitted).

The answer to this question is that the script has a source map available. With Chrome's Developer Tools panel open, the browser makes transparent requests for any available source map files (which it does not show you).
These files are often absolutely massive (mine are 3 MB+) and take considerable time to pull on every refresh. Nginx in particular also handles them very poorly, at least in my setup. I think lowering the maximum sendfile chunk size (nginx's sendfile_max_chunk directive) helps.
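If the source map turns out to be the culprit, the fix is usually on the build side rather than the server side. The following is a minimal sketch of a webpack production config (the entry path and loader are assumptions for illustration, not taken from the question); hidden-source-map keeps a map for your own use but stops the browser from requesting it, and devtool: false skips map generation altogether.

// webpack.config.js - a minimal sketch; the paths and loader are illustrative.
module.exports = {
  mode: 'production',
  entry: './src/search.jsx',
  output: {
    filename: 'search.bundle.js',
    path: __dirname + '/dist',
  },
  // 'hidden-source-map' still emits a .map file for your own debugging but
  // omits the sourceMappingURL comment, so DevTools never fetches it on refresh.
  // Use devtool: false to skip map generation entirely.
  devtool: 'hidden-source-map',
  module: {
    rules: [
      // JSX compiled with babel-loader here; adjust to your own toolchain.
      { test: /\.jsx?$/, exclude: /node_modules/, use: 'babel-loader' },
    ],
  },
};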

Related

Transfer file takes too much time

I have an empty API written in Laravel, running behind nginx & Apache. The problem is that the API takes a lot of time when I try it with various files, while it responds quickly when I try it with blank data.
Case 1: I call the API with a blank request; the response time is only 228 ms.
Case 2: I call the API with a 5 MB file in the request; the file transfer takes too much time, so the response time is far too long: 15.58 s.
So how can we reduce the transfer time in Apache or nginx? Is there any server configuration or anything else that I have missed?
When I searched on Google, the advice was to keep all versions up to date and use php-fpm, but when I configured php-fpm and the HTTP/2 protocol on my server I noticed it took even more time than above. All server software is up to date.
This has more to do with the fact that one request has nothing to process, so the response will be prompt, whereas the other request requires actual processing, and so the response will take as long as the server needs to process the content of your request.
Depending on the size of the file and your server configuration, you might hit a limit which will result in a timeout response.
A solution to the issue you're encountering is to chunk your file upload. There are a few packages available so that you don't have to write that functionality yourself; an example of such a package is Pion's Laravel Chunk Upload.
An alternative solution would be to offload the file processing to a Queue.
Update
When I searched on Google about chunking, it is not the best solution for small files like 5-10 MB; it is best suited to big files like 50-100 MB. So is there any server-side chunking configuration or anything else, or can I use this library to chunk small files?
According to the library's documentation this is a web library. What should I use if my API is called from Android and iOS apps?
True, chunking might not be the best solution for smaller files, but it is worth knowing about. My recommendation would be to use some client-side logic to determine whether sending the file in chunks is required (see the sketch below). On the server, use a queue to process the file upload in the background, so that the request can finish without waiting on the processing and a response can be sent back to the client (iOS/Android app) in a timely manner.
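As a rough illustration of that client-side decision, here is a minimal browser-side sketch. The /upload endpoint, the 2 MB threshold and the field names are assumptions made for the example, not part of the Laravel Chunk Upload package's actual contract; the server still needs matching chunk-assembly logic (which that package provides).

// Minimal sketch of client-side chunk-or-not logic. The endpoint name,
// threshold and field names are illustrative assumptions.
const CHUNK_SIZE = 2 * 1024 * 1024; // 2 MB

async function uploadFile(file) {
  if (file.size <= CHUNK_SIZE) {
    // Small file: a single multipart request is simplest.
    const form = new FormData();
    form.append('file', file);
    return fetch('/upload', { method: 'POST', body: form });
  }

  // Large file: send it slice by slice so no single request carries 5 MB+.
  const totalChunks = Math.ceil(file.size / CHUNK_SIZE);
  for (let i = 0; i < totalChunks; i++) {
    const chunk = file.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE);
    const form = new FormData();
    form.append('file', chunk, file.name);
    form.append('chunkIndex', String(i));
    form.append('totalChunks', String(totalChunks));
    await fetch('/upload', { method: 'POST', body: form });
  }
}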

Load testing of progressive download (video) or large file downloads

I am looking at load testing progressive-download video files with a 100-user load. The testing tools I am looking at are JMeter, LoadRunner and NeoLoad. The script required for creating the load is quite simple: it consists of a couple of requests, makes the connection with the server and starts downloading the file. I understand that progressive download is a fairly old technology, but it is still used on many websites. My question is about strategy.
Do we need to download the complete file (i.e. 1.3 GB in my case)?
We also looked at saving the response as a file, but then resources such as network and disk I/O are maxed out. Does that strategy suit here?
Can we have another strategy where we engage the server for the full duration and test for underlying connection issues and transmission speed?
Depending on your use case, there is a seeking feature, so theoretically you should be able to specify a start offset and avoid fetching the whole file. You can also consider using an HTTP Header Manager to send a Range header.
If your goal is to verify that the file has been downloaded fully and is not broken, you can tick the "Save Response as MD5 Hash" box on the "Advanced" tab of the HTTP Request sampler; this way you will save at least 130 GB of disk space. The MD5 checksum can then be verified using e.g. the MD5Hex Assertion.
The main idea of load testing is to simulate real application usage with 100% accuracy. Without knowing the requirements of your product it is impossible to come up with specific suggestions; however, JMeter can be configured to behave pretty much like a real browser does, so it is a viable option.
See the Load Testing Video Streaming with JMeter: Learn How article for more information if needed.

Why are Chrome cached requests taking time?

Even though Chrome is caching static files (JS, images, etc.), in the Network tab these files are taking some time to load, as shown in the picture below.
Meanwhile, many of the cached files load in just 0 ms. Can someone please tell me why, even though the files are loading from the cache, they take more than 0 ms?
At first glance, it does look quite strange to see Chrome spending time downloading resources even though they are coming from the cache. It's not the time spent downloading from a web server you're seeing. Rather, I believe it is the time spent downloading from a local database cache.
The retrieval of any data has some amount of cost involved. The resources are essentially stored in a database in Chrome, and to retrieve data requires a lookup, which is not instant. As well as looking up the data in a table, there is likely some processing involved to push the correct data into memory, since the data is not stored exactly how it is going to be used. It is likely to be compressed, and decompressing data can be a slow process.
You can see in the Network tab that, although it appears to take 0 ms to retrieve some resources, when you look at the Timings tab, you will see that it is actually rounded down. For example, I see both 0.08 ms stalled and 0.02 ms download in the request below, despite it showing 0 ms in the grid.
Update:
I looked further into this and found that Chrome extensions seem to have an effect on retrieval times from both the cache and the web, particularly ones that inject content into the page. Adblock seems to be the cause of some delay for me; the explanation above still very much applies for the rest.
Oddly, timings in Chrome are a bit... quirky... the time is not purely network time. If the JS engine gets blocked somehow, it is included in that total time...
If you hit this issue, go to the "Timeline" tab and record a full timeline.

What about the server uptime when using CGI-binaries?

I need to convert some of my Perl CGI scripts to binaries.
But when a script of 100 KB is converted into a binary, it becomes about 2-3 MB. It is understood why: the packer has to include everything needed to execute the script.
The question is about page load time on the server when the scripts are binaries. Say I have a binary Perl script, "script", that answers AJAX requests and weighs about 3 MB; will that affect the AJAX requests? If some users have a slow connection, will they wait for ages until all 3 MB are transferred? Or will the server NOT send all 3 MB to the user, but just an answer (a short piece of XML/JSON or whatever)?
Another case is an HTML page that is generated by this binary Perl script on the server. The user points their browser at the script, which weighs 3 MB, and should get back an HTML page. Will the user again wait until the whole script has been transferred (every single byte of those 3 MB), or only as long as it takes to load EXACTLY the HTML page (say, 70 KB), with the rest running on the server side only, never making the user wait for it?
Thanks!
Or will the server NOT send all 3 MB to the user, but just an answer (a short piece of XML/JSON or whatever)?
This.
The server executes the program. It sends the output of the program to the client.
There might be an impact on performance by bundling the script up (and it will probably be a negative one) but that has to do with how long it takes the server to run the program and nothing to do with how long it takes to send data back to the client over the network.
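To make that concrete, here is a toy CGI program, written in Node.js purely for illustration (the original scripts are Perl, but the principle is language-independent): the web server executes the program and forwards only what it writes to stdout, so the 3 MB executable itself never crosses the network.

#!/usr/bin/env node
// Toy CGI program. The server runs this executable for each request and
// sends ONLY the bytes written to stdout back to the client, however big
// the executable on disk happens to be.
const payload = JSON.stringify({ status: 'ok', items: [] });
process.stdout.write('Content-Type: application/json\r\n');
process.stdout.write('Content-Length: ' + Buffer.byteLength(payload) + '\r\n');
process.stdout.write('\r\n');
process.stdout.write(payload);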
Wrapping/packaging a Perl script into a binary can be useful for ease of transport or installation. Some folks even use it as a (trivial) form of obfuscation. But in the end, the act of "unpacking" the binary into usable components at the beginning of every CGI call will actually slow you down.
If you wish to improve performance in a CGI situation, you should seriously consider techniques that make your script persistent in order to eliminate startup time. mod_perl is an older solution to this problem. More modern solutions include FCGI or wrapping your script into its own mini web server.
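As a rough sketch of the persistence idea, shown in Node.js only to keep the examples in one language (mod_perl, FCGI or a PSGI server give a Perl script the same shape): the expensive startup work runs once, and every subsequent request reuses the already-running process.

// Minimal persistent mini web server. The port and response body are
// illustrative. The point is structural: initialisation runs once at
// startup, not once per request as in plain CGI.
const http = require('http');

// Expensive one-time setup (loading modules, unpacking, connecting to a
// database) goes here, outside the request handler.
const startedAt = new Date().toISOString();

http.createServer((req, res) => {
  // Each request only pays for the work it actually needs.
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ startedAt, now: new Date().toISOString() }));
}).listen(8080);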
Now if you are delivering the script to a customer and a PHB requires wrapping for obfuscation purposes, then be comforted that the startup performance hit only occurs once if you write your script to be persistent.

Can WinInet resume file downloads without starting over?

I'm using a combination of InternetSetFilePointer and InternetReadFile to support a resumable download. When I begin downloading a file, I check whether we already have part of it, call InternetSetFilePointer with the size of what we have, and then begin reading. This works ... however, here's my observation:
If I've downloaded 90% of a file, and it took 2 minutes to do so, then when I resume, the first call to InternetReadFile takes approximately 2 minutes to return! I can only conclude that behind the scenes it's simply downloading the file from the beginning, throwing out everything up to the point I gave to InternetSetFilePointer, and then returning with the "next" data.
So the questions are:
1) Does WinInet "simulate" InternetSetFilePointer, or does it really pass that information to the server?
2) Is there a way to make WinInet truly skip to the desired seek point, assuming the HTTP server supports doing so?
The server I'm downloading from is an Amazon S3 server, which I'm 99.9% sure supports resume.
The proper way to do this finally turned up in some extended searching, and here's a link to a good article about it:
http://www.clevercomponents.com/articles/article015/resuming.asp
Basically, to do correct HTTP resuming, you need to use the "Range" HTTP header, so that the server can serve just the portion of the resource your request asks for.
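For illustration, here is what that exchange looks like at the HTTP level, sketched with fetch for brevity (the URL and offset are placeholders). With WinInet the equivalent is to add the same Range header yourself, for example via HttpAddRequestHeaders or the extra-headers argument of HttpSendRequest, instead of relying on InternetSetFilePointer.

// Sketch of an HTTP resume using a Range request. The URL and byte offset
// are placeholders; a server that supports resuming (S3 does) replies with
// 206 Partial Content and only the remaining bytes.
async function resumeDownload(url, bytesAlreadyDownloaded) {
  const res = await fetch(url, {
    headers: { Range: 'bytes=' + bytesAlreadyDownloaded + '-' },
  });
  if (res.status !== 206) {
    // A 200 here means the server ignored the Range header and is sending
    // the whole file from the start again.
    throw new Error('Expected 206 Partial Content, got ' + res.status);
  }
  return res.arrayBuffer(); // just the tail of the file
}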
