I have an API returning a string response of over 5MB. When I call the API in Chrome and look at the Network tab of the Developer Tools, I see:
Waiting (TTFB): 189.65 ms
Content Download: 4.97 s
Why does the content download take so long compared to downloading a single 5MB file via FTP?
P.S.: It takes about 1 second to download a single 5MB file via FTP from the same server that the (Spring) API server is running on.
Because it is limited not only by the network speed (which is obviously not the culprit if you can download the same quantity faster another way), but also by the server's ability to produce the data. The developer tools only tell you that it took the server (189.65 ms minus travel time) to generate the first byte, and (189.65 ms + 4.97 s minus travel time) to generate the last byte; you can't know what it was doing in the meantime. For all you know, the code could have included sleep(4); you can't know why it took that long unless you profile the server-side process that produced the data.
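To see the distinction the answer is making, here is a self-contained sketch (Python, with a throwaway local server standing in for the real API) that separates time-to-first-byte from content-download time, the same split Chrome's Network tab shows. The 0.01 s sleeps simulate a server that generates the body slowly; the endpoint and sizes are illustrative.

```python
# Measure TTFB vs. content-download time for an HTTP response,
# mirroring what Chrome's Network tab reports. A local server that
# deliberately dribbles out the body plays the role of the slow API.
import http.server
import threading
import time
import urllib.request

class SlowHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(1024 * 1024))
        self.end_headers()
        # Generate the 1 MB body slowly: a browser would report this
        # as a long "Content Download" phase, not a long TTFB.
        for _ in range(16):
            self.wfile.write(b"x" * (64 * 1024))
            time.sleep(0.01)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

start = time.monotonic()
resp = urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/")
resp.read(1)                     # first byte has arrived -> TTFB
ttfb = time.monotonic() - start
resp.read()                      # drain the rest -> content download
total = time.monotonic() - start
server.shutdown()

print(f"TTFB {ttfb*1000:.0f} ms, content download {(total - ttfb)*1000:.0f} ms")
```

Run against the real endpoint instead of the local server, a large gap between the two numbers points at how the server streams the body, not at the network round trip.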
Related
I have an empty API in Laravel behind nginx and Apache. The problem is that the API takes a long time when I send it files, but responds quickly with blank data.
Case 1: I called the API with a blank request; the response time was only 228 ms.
Case 2: I called the API with a 5MB file in the request; the file transfer took so long that the response time was 15.58 s.
So how can we reduce the transfer time in Apache or nginx? Is there some server configuration or anything else I have missed?
When I searched on Google, the advice was to keep all versions up to date and use php-fpm, but when I configured php-fpm and the HTTP/2 protocol on my server, it took even more time than before. All server software is already on current versions.
This has more to do with the fact that one request has nothing to process, so the response is prompt, whereas the other requires actual processing, so the response takes as long as the server needs to process the content of your request.
Depending on the size of the file and your server configuration, you might hit a limit which will result in a timeout response.
A solution to the issue you're encountering is to chunk your file upload. There are a few packages available so that you don't have to write that functionality yourself; an example is the Pionl Laravel Chunk Upload package.
An alternative solution would be to offload the file processing to a Queue.
Update
When I searched on Google, chunking is not the best solution for
small files like 5-10 MB; it's best for big files like 50-100 MB. So
is there any server-side chunking configuration or anything else, or
can I use this library to chunk small files?
According to the library's documentation it is a web library. What
should I use if my API is called from Android and iOS apps?
True, chunking might not be the best solution for smaller files, but it is worth knowing about. My recommendation would be to use some client-side logic to determine whether sending the file in chunks is required. On the server, use a queue to process the file upload in the background, allowing the request to continue without waiting on the upload, so a response can be sent back to the client (iOS/Android app) in a timely manner.
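The client-side decision logic suggested here can be sketched in a few lines. This is a hypothetical illustration in Python (the real client would be the iOS/Android app): files under a threshold go up in one request, larger ones are split into ranges that can be sent as separate chunk requests. The 5 MB threshold and 1 MB chunk size are illustrative, not recommendations.

```python
# Sketch: decide whether to upload in one request or in chunks.
CHUNK_THRESHOLD = 5 * 1024 * 1024   # switch to chunked upload above 5 MB
CHUNK_SIZE = 1 * 1024 * 1024        # 1 MB per chunk (illustrative)

def plan_upload(file_size: int) -> list[tuple[int, int]]:
    """Return the (offset, length) ranges to send for a file of this size."""
    if file_size <= CHUNK_THRESHOLD:
        return [(0, file_size)]      # small file: a single request
    ranges = []
    offset = 0
    while offset < file_size:
        length = min(CHUNK_SIZE, file_size - offset)
        ranges.append((offset, length))
        offset += length
    return ranges

print(plan_upload(3 * 1024 * 1024))        # small file -> one range
print(len(plan_upload(12 * 1024 * 1024)))  # 12 MB file -> 12 chunks
```

Each range would then become one HTTP request to the chunked-upload endpoint, with the server-side queue reassembling and processing the file in the background.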
I am looking at load testing progressive-download video files with a 100-user load. The tools I am considering are JMeter, LoadRunner and NeoLoad. The script required to create the load is quite simple: it consists of a couple of requests, makes the connection with the server and starts downloading the file. I understand that progressive download is a fairly old technology, but it is still used on many websites. My question is about strategy.
Do we need to download the complete file (1.3 GB in my case)?
Even when we save the response as a file, resources such as network and disk I/O are maxed out. Does this strategy suit here?
Can we use another strategy where we engage the server for the duration and test for underlying connection issues and transmission speed?
Depending on your use case, there is a Seeking feature, so theoretically you should be able to specify a start offset and avoid fetching the whole file. You can also consider using an HTTP Header Manager to send a Range header.
If your target is to verify that the file has been downloaded fully and is not broken, you can tick the "Save Response as MD5 Hash" box on the "Advanced" tab of the HTTP Request sampler; this way you will save at least 130 GB of disk space. The MD5 checksum can then be verified using e.g. an MD5Hex Assertion.
The main idea of load testing is simulating real application usage as accurately as possible. Without knowing the requirements of your product it is impossible to make concrete suggestions; however, JMeter can be configured to behave much like a real browser does, so it is a viable option.
See Load Testing Video Streaming with JMeter: Learn How article for more information if needed.
I'm in the process of writing an app that builds a table of Trello card data based on multiple API calls, and while the app works I'm finding the performance degrades considerably the longer it runs. The initial calls take a couple of seconds while later calls (after 100 runs or so) take upwards of a minute.
Looking at the XHR Network tab in my Chrome console, I can see the bulk of the call is taken by the 'Content Download' phase of the AJAX call. I'm curious whether this means the issue is with my application or whether the problem lies with the API I'm calling. I'm a bit of a novice, so my terminology may not be quite right here.
The Content Download time is the time during which your content is downloaded from the server.
A very long time can be due to a slow connection on either the client or server side.
As you can see, the TTFB (time to first byte) is about 200 ms, so your server starts sending data after 200 ms. Your server process seems to be OK.
You can click on the Explanation link for further information.
I'm trying to increase the performance of a webpage. I'm using ReactJS + webpack, which compiles my JSX files into one file, search.bundle.js, and the server takes 2-3 seconds to return this file. The file size is ~200KB. Is the file size the only reason?
On a local server it works pretty well, but on the remote web server it is really slow.
There is a Google Map and a listing of items on the page, which I fetch with an AJAX request. It is a recursive request (repeating until enough data arrives, or a timeout) called in componentDidMount, but as I understand it, it can only start requesting items after the script has loaded on the page.
So is there any way to download this script faster, or should I just try to reduce its size?
Some data from the Headers tab (local vs. remote) was attached as screenshots, not reproduced here.
The answer to this question is that the script has a source map available. With Chrome's Developer Tools panel open, Chrome makes transparent requests for any available source map files (which it does not show you).
These files are often absolutely massive (mine are 3MB+) and take considerable time to pull on every refresh. Nginx in particular also handles them very poorly, at least in my setup. I think setting the maximum sendfile chunk size lower helps.
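For reference, the nginx knob being described is the `sendfile_max_chunk` directive, which caps how much a single `sendfile()` call can transfer at once so that one large response (such as a multi-megabyte source map) cannot monopolize a worker. A sketch of the relevant configuration; the 512k value is an example, not a recommendation:

```nginx
# Illustrative nginx snippet: limit the size of each sendfile() call.
http {
    sendfile           on;
    sendfile_max_chunk 512k;
}
```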
I'm using a combination of InternetSetFilePointer and InternetReadFile to support a resumable download. When I begin downloading a file, I check whether we already have part of it, call InternetSetFilePointer with the size of what we have, and then begin reading. This works... however, here's my observation:
If I've downloaded 90% of a file and it took 2 minutes to do so, then when I resume, the first call to InternetReadFile takes approximately 2 minutes to return! I can only conclude that behind the scenes it simply downloads the file from the beginning, throws out everything up to the point I gave to InternetSetFilePointer, and then returns the "next" data.
So the questions are:
1) Does WinInet "simulate" InternetSetFilePointer, or does it really pass that information to the server?
2) Is there a way to make WinInet truly skip to the desired seek point, assuming the HTTP server supports doing so?
The server I'm downloading from is an Amazon S3 server, which I'm 99.9% sure supports resume.
The proper way to do this finally turned up in some extended searching; here's a link to a good article about it:
http://www.clevercomponents.com/articles/article015/resuming.asp
Basically, to resume an HTTP download correctly you need to use the "Range" HTTP header, so that the server can serve just the requested portion of the resource.
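To make the Range mechanism concrete, here is a small Python sketch (the URL and offset are placeholders) that builds a resume request. A server that supports byte ranges, as S3 does, answers `206 Partial Content` with only the requested bytes instead of `200` with the whole file:

```python
# Sketch of HTTP resume via the Range header: ask for bytes from a
# given offset to the end of the resource, instead of re-downloading
# and discarding what we already have.
import urllib.request

def resume_request(url: str, offset: int) -> urllib.request.Request:
    req = urllib.request.Request(url)
    # "bytes=N-" means "from byte N through the end of the resource".
    req.add_header("Range", f"bytes={offset}-")
    return req

req = resume_request("http://example.com/file.bin", 4096)
print(req.get_header("Range"))   # bytes=4096-
```

On resume you would set `offset` to the size of the partial file on disk and append the `206` response body to it; checking for a `206` (rather than `200`) status confirms the server honored the range.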