As part of a bash script I need to download a file with a known size, but the download itself keeps failing: the file only gets partially downloaded every time. The server I'm downloading from doesn't seem particularly well set up - it doesn't report the file size, so wget (which I'm using currently) doesn't know how much data to expect. However, I know the exact size of the file, so in theory I could tell wget what to expect. Does anyone know if there is a way to do this? I'm using wget at the moment, but I can easily switch to curl if it will work better. I know how to adjust timeouts (which might help too) and retries, but I assume that for retries to work it needs to know the size of the file it's downloading.
I have seen a couple of other questions suggesting it might be a cookie problem, but that's not it in my case. The actual size downloaded varies from under 1 MB to 50 MB, so it looks more like some sort of lost connection.
Could you share the entire command, so we can check what parameters you are using? It's a strange case, though.
You could use the -c parameter, which resumes the download from the point where it stopped after a retry.
Or you can try the --spider parameter, which checks that the file exists and logs its details without downloading it.
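For reference, here is a minimal sketch of how those options might be combined (the URL and filename are placeholders; flag names are as in GNU wget):

    # -c resumes from the existing partial file instead of restarting;
    # --tries and --timeout bound the retry behaviour.
    wget -c --tries=20 --timeout=30 --retry-connrefused \
        -O output.bin "http://example.com/file.bin"

Note that -c relies on the server honoring HTTP Range requests; if the server ignores them, wget will start over from the beginning.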
I've been trying to figure out how to use the Down gem to do restartable downloads in Ruby.
So the scenario is downloading a large file over an unreliable link. The script should download as much of the file as it can in the timeout allotted (say it's a 5GB file, and the script is given 30 seconds). I would like that 30 second progress (partial file) to be saved so that next time the script is run, it will download another 30 seconds worth. This can happen until the complete file is downloaded and the partial file is turned into a complete file.
I feel like everything I need to accomplish this is in this gem, but it's unclear to me which features I should be using and how much of it I need to code myself (streaming? or caching?). I'm a Ruby beginner, so I'm guessing I use the caching, save the progress to a file myself, and enumerate for as many times as I have time.
How would you solve the problem? Would you use a different gem/method?
You probably don't need to build that yourself. Existing tools like curl and wget already have that functionality.
If you really want to build it yourself, you could perhaps take a look at how curl and wget do it (they're open-source, after all) and implement the same in Ruby.
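For what it's worth, the 30-seconds-per-run scenario maps fairly directly onto curl's resume support; a hedged shell sketch (URL and filename are placeholders):

    #!/bin/sh
    # Each run appends up to 30 seconds' worth of data to the partial file.
    # -C - tells curl to resume from the current size of big.iso;
    # exit status 28 means the time limit cut the transfer short.
    curl -C - --max-time 30 -o big.iso "http://example.com/big.iso"

Run the script repeatedly (e.g. from cron) until curl exits 0, at which point the file is complete.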
A note in the README file said to ask questions here, so I am doing so.
The RIPEstat service has just shut off their port 43 plain-text service and is now forcing everyone to access their data using jq. I have zero experience with or knowledge of jq, but I am forced to give it a try. I have just built the thing from sources (jq-1.5) on my crusty old FreeBSD 9.x system, and the build completed OK, but one of the post-build verification tests (tests/onigtest) failed. I am looking at the test-suite.log file, but none of what's in there means anything to me. (Unfortunately, I am also new to Stack Overflow, so I have no idea how to even upload a copy of that here for the maintainer to peruse.)
So, my questions:
1) Should I even worry about the failure of tests/onigtest?
2) If I should, then what should I do about this failure?
3) What is the best and/or most proper way for me to get a copy of the test-suite.log file to the maintainer(s)?
Should I even worry about the failure of tests/onigtest?
If the only failures are related to onigtest, then most likely only the regex filters will be affected.
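For example, it is the regex builtins such as test() and match() that are backed by the oniguruma library, while ordinary string filters are not. A quick smoke test (hypothetical input):

    # Regex filters (oniguruma-backed); these are what onigtest exercises:
    echo '"foobar"' | jq 'test("^foo")'       # => true
    # Non-regex string filters are unaffected by an onigtest failure:
    echo '"foobar"' | jq 'startswith("foo")'  # => true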
what should I do about this failure?
According to the jq download page, there is a pre-built binary for FreeBSD, so you might try that.
From your brief description, it's not clear to me what exactly you did, but if you haven't already done so, you might also consider building an executable from a git clone of "master" as per the guidelines on the download page; see also https://github.com/stedolan/jq/wiki/Installation#or-build-from-source
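Roughly, the from-source route looks like this, per the instructions on that page (details may differ on FreeBSD 9.x; the submodule step pulls in the bundled oniguruma):

    git clone https://github.com/stedolan/jq.git
    cd jq
    git submodule update --init        # fetch the bundled oniguruma
    autoreconf -fi
    ./configure --with-oniguruma=builtin
    make
    make check                         # re-runs the test suite, onigtest included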
What is the best and/or most proper way for me to get a copy of the test-suite.log file to the maintainer(s)?
You could create a ticket at https://github.com/stedolan/jq/issues
I've been having issues lately with a VBScript that downloads files from an FTP server. It's been working OK for some years, but recently it's been downloading 0-byte files; they're just empty. This should never be the case.
While troubleshooting, I tried downloading a batch of files using the FTP command in CMD, and a different set of the downloaded files were 0 bytes (by the way, this is why I didn't add WINSCP and VBScript tags). On re-attempting the download of those empty files, I noticed they downloaded without issue; they came with data. What could be the issue here? What else can I try? Specifically, something that keeps the current OS, as I don't really have control over that.
Thanks.
I wrapped the download code in a loop; in each iteration I confirm the file's existence and size by passing the filename to a function. Doing this lets me log more, and I can see that the 0-byte downloads happen more often than I thought.
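The same retry-and-verify idea, sketched in shell purely for illustration (the VBScript version has the same shape; ftp_download and the size check are placeholders):

    # Hypothetical retry loop: re-fetch until the file is non-empty,
    # logging each attempt so the 0-byte cases are visible afterwards.
    for attempt in 1 2 3; do
        ftp_download "$filename"                  # placeholder for the actual fetch
        size=$(wc -c 2>/dev/null < "$filename" || echo 0)
        if [ "$size" -gt 0 ]; then
            echo "ok: $filename ($size bytes) on attempt $attempt"
            break
        fi
        echo "retry: $filename was 0 bytes on attempt $attempt"
    done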
I did a quick search and couldn't find anything relevant, which I found strange, as this seems like it would be a common question. Maybe I'm just going the wrong way about it or being thick? Who knows.
Anyway, I am trying to set up a scheduled task that moves all the files in a folder on Server A to a folder on Server B. If this were a simple matter of copying them it would be fine, as I've already got that working using Core FTP and a batch file, but I'd like the files to be removed from Server A after the copy has taken place.
I was looking at the Windows FTP commands, but although I managed to log onto Server A successfully from Server B, whenever I tried to run a command it just took a very long time and then disconnected.
Any help with this would be appreciated. It needs to be schedulable, but it doesn't matter whether it's a .bat, .vbs, or anything else I haven't thought of.
Thanks,
Harry
You could use www.Dropbox.com
Why? For stability. Any home-brew FTP script that moves files is prone to an undetected transmission error resulting in deleted files.
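That said, if a scripted transfer is still required, a tool with a built-in move mode avoids hand-rolling the delete step. For example, with lftp (assuming it can be installed on Server B, e.g. under Cygwin; credentials and paths are placeholders):

    # Pull everything from Server A into a local folder on Server B,
    # deleting each source file only after it transfers successfully.
    lftp -u user,password ftp://server-a -e \
        "mirror --Remove-source-files /remote/folder /local/folder; quit"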
I'm looking for a good non-interactive, command line FTP client to be run from a Rakefile. Like Weex, but better. Weex has different problems (for me):
It stores its config file in my home dir. I want the FTP config to be part of my project and weex doesn't have a --config-file option or something.
The behavior of ignoring files seems to be completely buggy. It doesn't remove files that it should, it doesn't let me specify relative paths even though I follow the man page's instructions, and so on. I've been struggling with it for an hour now and it is just completely inexplicable.
I tried running rsync over FTPFS/FUSE, but that is dead slow because FTP doesn't store mtimes, which makes rsync diff every file. Plus, there are some refresh problems and other bugs that cause access failure (http://bugs.gentoo.org/208168).
I'm stuck with FTP, unfortunately. Any help is appreciated.
Perhaps something from the ncftp suite (http://www.ncftp.com/ncftp/)? This has the ability to specify a config file of your choice and tools to operate non-interactively (ncftpget/ncftpput).
It doesn't appear to have ignore functionality, but hopefully this was helpful to you.
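For example, a non-interactive recursive upload might look like this (login.cfg and the paths are placeholders; the file holds host, user, and pass lines):

    # -f reads the connection details from login.cfg, which can live
    # inside the project; -R uploads the directory recursively.
    ncftpput -f login.cfg -R /remote/path ./local_dir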
I've used lftp in the past with good results. It's installed by default in many distributions and offers pretty sophisticated functionality (including a couple of ways to exclude files).
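For instance, a reverse mirror (local to remote) with an exclude might look like this (credentials and paths are placeholders):

    # -R mirrors in reverse, i.e. uploads; --only-newer skips files the
    # remote already has; --exclude-glob filters by shell glob.
    lftp -u user,password ftp://example.com -e \
        "mirror -R --only-newer --exclude-glob '.git*' ./site /public_html; quit"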
Try sitecopy: http://www.manyfish.co.uk/sitecopy/
The trouble with lftp is that it is very slow for mirroring, which I suppose is what you want to do since you have been using weex.
Unfortunately, both weex and sitecopy have very limited proxy handling, so if you need to go through an HTTP proxy, lftp may still be your best bet.