Why does a change in ISP affect the download progress?

I was downloading a file on my PC using my mobile hotspot from Sim_1, and 50% of the download had completed. When I changed the mobile data source to Sim_2, the download began from the beginning. :(
Is this problem caused by the change in ISP, or maybe by some other issue?
PS: Sim_1 and Sim_2 are on different networks.

I think that in order to resume a download, both the server the resource is being fetched from and the client need to support it.
They have to agree on the progress made and resume the download from the point in the byte stream the client has reached. In HTTP this is achievable with partial requests using the Range header (advertised by the server via Accept-Ranges): https://developer.mozilla.org/en-US/docs/Web/HTTP/Range_requests
Essentially, whether your download manager can change network and resume from where it left off depends on the server providing the resource, because that server needs to support partial requests. If the server is only capable of sending the bytes from start to end, then that is the best your client can hope for.
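To make that concrete, here is a minimal sketch (in Python, using the requests library) of how a client might resume a download with a Range header; the URL and file names are placeholders, and it assumes the server advertises Accept-Ranges and answers with 206 Partial Content when it honours the range.

    import os
    import requests

    def resume_download(url: str, dest: str, chunk_size: int = 64 * 1024) -> None:
        """Resume (or start) downloading `url` into `dest` using an HTTP Range request."""
        offset = os.path.getsize(dest) if os.path.exists(dest) else 0
        headers = {"Range": f"bytes={offset}-"} if offset else {}

        with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
            if resp.status_code == 206:
                mode = "ab"          # server honoured the Range header: append to what we have
            elif resp.status_code == 200:
                mode = "wb"          # server ignored the Range header: start over from byte 0
            else:
                resp.raise_for_status()
                return

            with open(dest, mode) as fh:
                for chunk in resp.iter_content(chunk_size=chunk_size):
                    fh.write(chunk)

    # Example: resume_download("https://example.com/big.iso", "big.iso")

If the server replies with a plain 200 instead of 206, it has ignored the range and is sending the whole file again, which matches the behaviour described in the question.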

Related

Transfer file takes too much time

I have an empty API endpoint in Laravel, served behind Nginx and Apache. The problem is that the API takes a long time when I send it files, while it responds quickly when I send blank data.
Case 1: I called the API with a blank request; the response time was only 228 ms.
Case 2: I called the API with a 5 MB file in the request; the file transfer took so long that the response time grew to 15.58 s.
How can I reduce the transfer time on the Apache or Nginx server? Is there any server configuration or anything else that I have missed?
When I searched on Google, the advice was to keep all versions up to date and use PHP-FPM, but when I configured PHP-FPM and the HTTP/2 protocol on my server I noticed that it took even more time than above. All server versions are already up to date.
This has more to do with the fact that one request has nothing to process, so the response is prompt, whereas the other requires actual processing, so the response takes as long as the server needs to handle the content of your request.
Depending on the size of the file and your server configuration, you might hit a limit that results in a timeout response.
A solution to the issue you're encountering is to chunk your file upload. There are a few packages available so that you don't have to write that functionality yourself; an example of such a package is Pionl's Laravel Chunk Upload.
An alternative solution would be to offload the file processing to a queue.
Update
When I searched on Google about chunking, it is not the best solution for small files like 5-10 MB; it is better suited to big files like 50-100 MB. So is there any server-side chunking configuration or anything else, or can I use this library to chunk small files?
According to the library's documentation it is a web library. What should I use if my API is called from Android and iOS apps?
True, chunking might not be the best solution for smaller files, but it is worthwhile knowing about. My recommendation would be to use some client-side logic to determine whether sending the file in chunks is required, as sketched below. On the server, use a queue to process the file upload in the background, so the request doesn't block on the processing and a response can be sent back to the client (iOS/Android app) in a timely manner.
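As a rough illustration (not the library's API), here is a Python sketch of that client-side decision; the endpoints, the 10 MB threshold and the chunk size are hypothetical and would need to match whatever the server actually exposes.

    import os
    import requests

    CHUNK_THRESHOLD = 10 * 1024 * 1024   # hypothetical cut-off: only chunk files larger than 10 MB
    CHUNK_SIZE = 1 * 1024 * 1024         # 1 MB per chunk

    def upload(path: str, base_url: str = "https://api.example.com") -> None:
        size = os.path.getsize(path)
        if size <= CHUNK_THRESHOLD:
            # Small file: send it in one request and let the server queue the processing.
            with open(path, "rb") as fh:
                requests.post(f"{base_url}/upload", files={"file": fh}, timeout=60).raise_for_status()
            return

        # Large file: send it piece by piece to a hypothetical /upload/chunk endpoint.
        with open(path, "rb") as fh:
            index = 0
            while chunk := fh.read(CHUNK_SIZE):
                requests.post(
                    f"{base_url}/upload/chunk",
                    files={"chunk": chunk},
                    data={"name": os.path.basename(path), "index": index, "total_size": size},
                    timeout=60,
                ).raise_for_status()
                index += 1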

What exactly happens when we disable history in Firefox?

I have a problem with live video streams in the system I am developing that happens only in Firefox and only in normal mode.
The player loads the stream correctly, but after a few seconds it stops loading and just keeps retrying forever.
This doesn't happen in Chrome, nor if I load the page in Private Mode, nor with regular videos. Just with live streams, just in Firefox, just in normal mode.
This happens both in local development (home, remote connection) and in the corporate cloud.
It's an Angular 8/Node.js system, and the player I use is Clappr. I switched to Video.js and the problem persisted.
The stream comes from a load balancer with 6 child servers, each running an Apache server that proxies to the Icecast server that originates the stream.
[load balancer] -> [6 child servers with Apache proxy] -> [Icecast server]
I work for a very large company that has an IPS in place. That was the first thing I suspected, but the IPS team could not find any blocked traffic. Besides, if that were the cause, why would the traffic not also be blocked in Private Mode?
So I tried to pinpoint which configuration in Private Mode makes the difference, and I found that disabling all history (not only navigation and downloads or forms) makes it work too.
Does anyone know what exactly happens when the navigation history is disabled? Besides not saving history, does it have an impact on something else: some type of cache, the network, or something like that? Does anyone have an idea how to make the stream work without disabling history? I can't ask my users to disable history just to use my system.
EDIT
One thing that may be relevant to the issue is that in Firefox the player doesn't show the LIVE label when the transmission starts; it shows a negative number instead. Maybe this could create some problem with the history.
I couldn't find out what exactly happens when we disable history in Firefox, but I did solve the problem of playing the stream in Firefox, so I won't accept this answer, but I leave it here for future reference in case someone has a similar problem.
I solved it by appending ?nocache=<random integer of length 10> to the video URL. Note that if your URL already has parameters, you can't have two ? characters in it and have to merge the parameters correctly.
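The app in question is Angular, but the parameter-merging logic is the same in any language; here is a small Python sketch of the idea, using the standard urllib.parse helpers so an existing query string is preserved rather than a second ? being appended.

    import random
    from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

    def add_nocache(url: str) -> str:
        """Append a nocache=<random 10-digit integer> parameter without duplicating '?'."""
        parts = urlparse(url)
        query = dict(parse_qsl(parts.query))
        query["nocache"] = str(random.randint(10**9, 10**10 - 1))
        return urlunparse(parts._replace(query=urlencode(query)))

    # add_nocache("https://example.com/live/stream.m3u8?token=abc")
    # -> "https://example.com/live/stream.m3u8?token=abc&nocache=1234567890"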

Load testing of progressive download (video) or large file downloads

I am looking at load testing progressive-download video files with a 100-user load. The tools I am considering are JMeter, LoadRunner and NeoLoad. The script required for creating the load is very simple: it consists of a couple of requests, makes the connection with the server and starts downloading the file. I understand that progressive download is a fairly old technology, but it is still used on many websites. My question is about the strategy.
Do we need to download the complete file (i.e. 1.3 GB in my case)?
We also looked at saving the response as a file, but then resources such as network and disk I/O are maxed out. Does that strategy suit here?
Can we have another strategy where we engage the server for the duration and test for underlying connection issues and transmission speed?
Depending on your use case, there is a seeking feature, so theoretically you should be able to specify a start offset and not have to get the whole file. You can also consider using the HTTP Header Manager to send a Range header.
If your goal is to verify that the file has been downloaded fully and is not broken, you can tick the "Save Response as MD5 Hash" box on the "Advanced" tab of the HTTP Request sampler; this way you will save at least 130 GB of disk space. The MD5 checksum can then be verified using, for example, the MD5Hex Assertion.
The main idea of load testing is to simulate real application usage with 100% accuracy. Not knowing the requirements of your product, it is impossible to come up with concrete suggestions; however, JMeter can be configured to behave pretty much like a real browser does, so it is a viable option.
See the Load Testing Video Streaming with JMeter: Learn How article for more information if needed.

Can WinInet resume file downloads without starting over?

I'm using a combination of InternetSetFilePointer and InternetReadFile to support a resumable download. So when I begin downloading a file, I check whether we already have part of it, call InternetSetFilePointer with the size of what we have, and then begin reading. This works... however, here's my observation:
If I've downloaded 90% of a file and it took 2 minutes to do so, then when I resume, the first call to InternetReadFile takes approximately 2 minutes to return! I can only conclude that behind the scenes it is simply downloading the file from the beginning, throwing out everything up to the point I gave to InternetSetFilePointer, and then returning with the "next" data.
So the questions are:
1) Does WinInet "simulate" InternetSetFilePointer, or does it really give that info to the server?
2) Is there a way to make WinInet truly skip to the desired seek point, assuming the HTTP server supports doing so?
The server I'm downloading from is an Amazon S3 server, which I'm 99.9% sure supports resume.
The proper way to do this finally turned up in some extended searching, and here's a link to a good article about it:
http://www.clevercomponents.com/articles/article015/resuming.asp
Basically, to do correct HTTP resuming, you need to use the "Range" HTTP header, such that the server can correctly portion the resource for your requests.
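Since the exchange is plain HTTP, a quick way to confirm that a given server (S3 included) will honour the Range header is to probe it before relying on seeking. Here is a small Python sketch of such a check; it is independent of WinInet and the URL is a placeholder.

    import requests

    def supports_resume(url: str) -> bool:
        """Probe whether the server honours Range requests (returns 206 Partial Content)."""
        resp = requests.get(url, headers={"Range": "bytes=0-0"}, stream=True, timeout=30)
        resp.close()
        # 206 means the server sliced the resource for us; 200 means it ignored the Range header.
        return resp.status_code == 206

    # S3 objects generally advertise Accept-Ranges: bytes, so this should return True for them.
    # print(supports_resume("https://my-bucket.s3.amazonaws.com/big-file.bin"))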

Handle Unstable Internet Connection in Server-Client App

What technology can I use to manage an unstable internet connection in a server-client app? I know mainly PHP (+ Zend Framework) and am learning C# & ASP.NET MVC. I heard WCF/MSMQ is something that can help... but how? Is there something PHP (which I am more familiar with) can do? It is also good to know a .NET alternative if it's better.
The background:
Clients will connect to the server DB to do CRUD operations, but if the internet connection fails this is not possible. So how do I fix this?
The solution used now is to have local DBs. At the end of the day all clients upload to the server, and in the morning they download a "consolidated" DB from the server. This is not foolproof, as the upload/download may still fail, and considering the large amounts of data transferred, it actually increases the chances of failure.
UPDATE: Is there a PHP/Zend Framework/MySQL replacement for MSMQ/WCF?
WCF can help, because it supports various technologies for reliable message transfer.
One thing that might help is to have the clients make their data changes locally and then upload those changes to a reliable message queue. You would not upload all changes in a single transaction; you might upload 10 at a time, possibly one at a time. As the uploaded messages are processed on the server, the server would write the transaction results to another queue, unique to each client. After the upload (or maybe at the same time), the client would check that queue to see what the result of each upload was. If the result was success, the client can remove that change from its local database. If the result was a failure, the client should try uploading it again.
Of course, you should always be careful that your attempts at error recovery don't make things worse. Too much retry traffic on a bad link may very well cause more traffic, which may itself need recovery, etc.
And, of course, the ultimate solution is to move towards links that are more reliable. Not necessarily faster, but just more reliable.
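As a rough sketch of that idea (the endpoint and the local storage are hypothetical, not a specific MSMQ/WCF API), a client-side "outbox" in Python could look like this: changes are queued locally and flushed to the server one at a time, and anything that fails stays queued for the next attempt.

    import json
    import sqlite3
    import requests

    API = "https://server.example.com"   # hypothetical server endpoint

    def queue_change(db: sqlite3.Connection, change: dict) -> None:
        """Record a local change in an 'outbox' table instead of writing straight to the server."""
        db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")
        db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(change),))
        db.commit()

    def flush_outbox(db: sqlite3.Connection) -> None:
        """Upload queued changes one at a time; keep anything that fails for the next attempt."""
        for row_id, payload in db.execute("SELECT id, payload FROM outbox ORDER BY id").fetchall():
            try:
                resp = requests.post(f"{API}/changes", json=json.loads(payload), timeout=10)
                resp.raise_for_status()
            except requests.RequestException:
                break   # connection is down or the server rejected it; retry on the next flush
            db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
            db.commit()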
