Aria2c pause and resume every 5 seconds - aria2

I have a problem with downloading files from a server: when I start downloading, the speed is good, but after a couple of seconds the download speed decreases.
I am using aria2c and want to know if there is any way to pause and resume the download every 5 seconds?

I solved my problem by using the aria2 RPC interface.
aria2 provides JSON-RPC over HTTP and XML-RPC over HTTP interfaces that offer basically the same functionality. aria2 also provides JSON-RPC over WebSocket.
I wrote a script in Node.js which uses aria2.pause and aria2.unpause to pause and resume the download every 5 seconds.
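A minimal sketch of the same idea, driving aria2's JSON-RPC endpoint with curl instead of Node.js. It assumes aria2c was started with --enable-rpc (default endpoint http://localhost:6800/jsonrpc) and that the download's GID is known, e.g. from aria2.addUri or aria2.tellActive; the GID below is a placeholder.
#!/bin/bash
rpc="http://localhost:6800/jsonrpc"
gid="2089b05ecca3d829"   # placeholder GID of the running download

while true ; do
    # Pause the download
    curl -s "$rpc" -d "{\"jsonrpc\":\"2.0\",\"id\":\"1\",\"method\":\"aria2.pause\",\"params\":[\"$gid\"]}" > /dev/null
    sleep 5
    # Resume it again
    curl -s "$rpc" -d "{\"jsonrpc\":\"2.0\",\"id\":\"2\",\"method\":\"aria2.unpause\",\"params\":[\"$gid\"]}" > /dev/null
    sleep 5
done
If an RPC secret is configured, each params array additionally needs a "token:<secret>" entry as its first element.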

Related

How to enable chrome://webrtc-internals in electron to download webrtc getStats files

We are trying to download the logs from chrome://webrtc-internals. The approach is to open chrome://webrtc-internals in the Electron background when the call is accepted and download the stats file when the call ends in the Electron app. What's the right approach?
A better approach would be to send the stats by calling the getStats API on the peer connection. This way you can avoid opening chrome://webrtc-internals and sending its file after the call terminates.
More information on the getStats API can be found here: https://www.w3.org/TR/webrtc-stats/
You can poll the stats every 10-15 seconds and send them to your back-end.
Alternatively, you can upgrade Electron to version 9.x or later and download the stats by opening webrtc-internals yourself:
window.open("chrome://webrtc-internals")

How can I programmatically visit a website without using curl?

I'm trying to send a large number of queries to my server. When I open a certain website (with certain parameters), it sends a query to my server, and computation is done on my server.
Right now I'm opening the website repeatedly using curl, but when I do that, the website contents are downloaded to my computer, which takes a long time and is not necessary. I was wondering how I could either open the website without using curl, or use curl without actually downloading the webpage.
Do the requests in parallel, like this:
#!/bin/bash
url="http://your.server.com/path/to/page"
for i in {1..1000} ; do
# Start curl in background, throw away results
curl -s "$url" > /dev/null &
# Probably sleep a bit (randomize if you want)
sleep 0.1 # Yes, GNU sleep can sleep less than a second!
done
# Wait for background workers to finish
wait
curl still downloads the contents to your computer, but a test where the client does not download the content would not be very realistic anyway.
Obviously the above solution is limited by the network bandwidth of the test server, which is usually worse than the bandwidth of the web server. For realistic bandwidth tests you would need to use multiple test servers.
However, especially with dynamic web pages, the bottleneck might not be bandwidth but memory or CPU. For such stress tests, a single test machine might be enough.

Can WinInet resume file downloads without starting over?

I'm using a combination of InternetSetFilePointer, and InternetReadFile, to support a resumable download. So when I begin downloading a file, I check to see if we already have part of it, and call InternetSetFilePointer using the size of what we have, and then I begin reading. This works ... however, here's my observation:
If I've downloaded 90% of a file, and it took 2 minutes to do so, when I resume, the first call to InternetReadFile takes approximately 2 minutes to return! I can only conclude that behind the scenes, it's simply downloading the file from the beginning, throwing out everything up to the point I gave to InternetSetFilePointer, and then returning with the "next" data.
So the questions are:
1) does WinInet "simulate" InternetSetFilePointer, or does it really give that info to the server?
2) Is there a way to make WinInet truly skip to the desired seek point, assuming the HTTP server supports doing so?
The server I'm downloading from is an Amazon S3 server, which I'm 99.9% sure supports resume.
The proper way to do this finally turned up in some extended searching, and here's a link to a good article about it:
http://www.clevercomponents.com/articles/article015/resuming.asp
Basically, to do correct HTTP resuming, you need to use the "Range" HTTP header, such that the server can correctly portion the resource for your requests.
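As an illustration of the mechanism (shown with curl rather than WinInet, and with a placeholder URL), a resumed download is just a request carrying a Range header; a server that supports resuming answers with 206 Partial Content:
# Ask the server for everything from byte 1048576 onwards
curl -H "Range: bytes=1048576-" -o file.part "https://example-bucket.s3.amazonaws.com/file.bin"

# Equivalent, letting curl work out the offset from the size of the existing local file
curl -C - -o file.bin "https://example-bucket.s3.amazonaws.com/file.bin"
With WinInet, the same header can be added to the request (for example with HttpAddRequestHeaders) before it is sent, instead of relying on InternetSetFilePointer.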

Downloading file from ftp server to local machine

How should I download a file from an FTP server to my local machine using PHP? Is curl good for this?
You can use wget or curl from PHP. Be aware that the PHP script will wait for the download to finish, so if the download takes longer than your PHP max_execution_time, your script will be killed during runtime.
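For the download itself, a plain curl or wget invocation (hypothetical host, credentials and path) is enough; from PHP it could be triggered via exec() or replicated with the curl extension:
# Hypothetical credentials, host and path - fetch remote.zip into the current directory
curl -o remote.zip "ftp://user:password@ftp.example.com/path/to/remote.zip"

# wget equivalent
wget "ftp://user:password@ftp.example.com/path/to/remote.zip"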
The best way to implement something like this is to do it asynchronously; that way you don't slow down the execution of the PHP script, which is probably supposed to serve a page later.
There are many ways to implement it asynchronously. The cleanest one is probably to use some queue like RabbitMQ or ZeroMQ over AMQP. A less clean one, which works as well, would be writing the URLs to download into a file and then implementing a cronjob which checks this file for new URLs every minute and executes the downloads.
Just some ideas...

Scripting a major multi-file multi-server FTP upload: is smart interrupted transfer resuming possible?

I'm trying to upload several hundred files to 10+ different servers. I previously accomplished this using FileZilla, but I'm trying to make it go using just common command-line tools and shell scripts so that it isn't dependent on working from a particular host.
Right now I have a shell script that takes a list of servers (in ftp://user:pass@host.com format) and spawns a new background instance of 'ftp ftp://user:pass@host.com < batch.file' for each server.
This works in principle, but as soon as the connection to a given server times out/resets/gets interrupted, it breaks. While all the other transfers keep going, I have no way of resuming whichever transfer(s) have been interrupted. The only way to know if this has happened is to check each receiving server by hand. This sucks!
Right now I'm looking at wput and lftp, but these would require installation on whichever host I want to run the upload from. Any suggestions on how to accomplish this in a simpler way?
I would recommend using rsync. It's really good at transferring only the data that has changed during a transfer. Much more efficient than FTP! More info on how to resume interrupted connections with an example can be found here. Hope that helps!
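A minimal sketch with hypothetical local and remote paths: --partial keeps partially transferred files so an interrupted upload can pick up where it left off instead of restarting.
# -a preserves permissions/timestamps, --partial keeps partial files for resuming,
# --progress shows per-file progress (hypothetical paths and host)
rsync -a --partial --progress /local/files/ user@host.example.com:/remote/dir/
Note that this assumes the remote ends accept rsync (usually over SSH); hosts that only expose FTP would still need one of the FTP-based tools.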
