aria2c: save download result table to a file

I'm downloading multiple links with aria2c using a list of links. At the end of the download run, aria2c outputs a summary table on the terminal, like so:
Download Results:
gid   |stat|avg speed  |path/URI
======+====+===========+============
1d11bc|OK  |    23KiB/s|/mnt/72B627...
I want to:
1. Make the table show the original links from which the downloads happened.
2. Save this table to a file.
In the docs, all I've found is --download-result=<OPT>, which might help with 1, but I can't find anything hinting towards 2. Is there any option in aria2c to save that summary table? If not, is there a way to capture it from the terminal output without having to capture all the other output?
This might be something very obvious, but I'm not able to find anything about it right now. Just for completeness, I'm on Ubuntu and using aria2c in the terminal.

With logging via --log and --log-level, maybe you can generate the table again.
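Alternatively, if all you need is a copy of what aria2c prints, you can capture the terminal output directly. A minimal sketch, assuming your links are in links.txt and that you don't mind the rest of the run's output landing in the same file:

# --download-result=full makes the summary table show the original URIs (point 1);
# tee writes everything aria2c prints to aria2-run.log while still showing it on screen (point 2)
aria2c -i links.txt --download-result=full 2>&1 | tee aria2-run.log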


Can't see files with Symlink

I need my client to be able to see the files in the directory they are allowed on. So I soft-linked the directory they are allowed on, but they can't see the files inside even though they have the right permissions (rwx).
ex:
/home/user1/project1.link/(couple of files)**
/clients/client_shamwow/project1/(couple of files)
**: Can't see the files.
This is the line I used:
ln -s /clients/client_shamwow/projet_prod /home/user1/projet_prod
Is there something wrong with what I am doing so that they can't see the files in project_prod, or should I use something else?
Your command doesn't match your example, but I assume you mean /home/user1/project1.link is a soft (symbolic) link, and when you run ls it lists just that name, rather than the contents of the directory the link points to. If that's the case, add the -L option to your ls command.
ls -lL /home/user1/project1.link
The man page says:
-L, --dereference
when showing file information for a symbolic link, show information
for the file the link references rather than for the link itself
Another way is simply to append /. to the end of your command, as in
ls -l /home/user1/project1.link/.
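To see the difference quickly, here is a throwaway sketch with placeholder /tmp paths (not your real directories):

# create a target directory with one file in it, and a symlink pointing at it
mkdir -p /tmp/target_dir && touch /tmp/target_dir/somefile
ln -s /tmp/target_dir /tmp/dir_link

ls -l  /tmp/dir_link   # -l on the link operand shows the link entry itself
ls -lL /tmp/dir_link   # -L dereferences the link and lists somefile in the target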
If that doesn't answer your question, I think you need to be more clear, and perhaps clean up the inconsistencies in your question. Even show some real output and the commands you ran.
Solved. No idea what happened. I just recreated the link the exact same way I did before, and now I am able to see AND modify the files as user1, without him being able to go anywhere other than what is in the project_prod folder. Thanks for your time :)

wget: delete incomplete files

I'm currently using a bash script to download several images using wget.
Unfortunately the server I am downloading from is less than reliable, so sometimes when I'm downloading a file the server will disconnect and the script will move on to the next file, leaving the previous one incomplete.
In order to remedy this, I've tried adding a second line after the script that fetches all incomplete files using:
wget -c myurl.com/image{1..3}.png
This seems to work, as wget goes back and completes the download of the files, but the problem then comes from this: ImageMagick, which I use to stitch the images into a PDF, claims there are errors with the headers of the images.
My thought of what to do about deleting the incomplete files is:
wget myurl.com/image{1..3}.png
wget -rmincompletefiles
wget -N myurl.com/image{1..3}.png
convert *.png mypdf.pdf
So the question is, what can I use in place of -rmincompletefiles that actually exists, or is there a better way I should be approaching this issue?
I made a surprising discovery when attempting to implement tvm's suggestion.
It turns out, and this is something I didn't realize, that when you run wget -N, wget actually checks file sizes and verifies they are the same. If they are not, the files are deleted and then downloaded again.
So cool tip if you're having the same issue I am!
I've found this solution to work for my use case.
From the answer:
wget http://www.example.com/mysql.zip -O mysql.zip || rm -f mysql.zip
This way, the file will only be deleted if an error or cancellation occurred.
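Applied to the numbered images from the question, the same pattern might look like this (a sketch, reusing the image{1..3}.png names and myurl.com URL from the question):

#!/bin/bash
# Delete each image immediately if its download fails or is interrupted,
# so no truncated file is left behind for convert to choke on.
for n in {1..3}; do
    wget "myurl.com/image${n}.png" -O "image${n}.png" || rm -f "image${n}.png"
done
convert image*.png mypdf.pdf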
Well, I would try hard to download the files with wget (you can specify extra parameters like a larger --timeout to give the server some extra time). wget assumes certain things about partial downloads, and even with proper resume they can sometimes end up mangled (unless you check, e.g., their MD5 sums by other means).
Since you are using convert and bash, there will most likely be another tool available from the ImageMagick package, namely identify.
While certain features are surely poorly documented, it has one awesome feature: it can identify broken (or partially downloaded) images.
➜ ~ identify b.jpg; echo $?
identify.im6: Invalid JPEG file structure: ...
1
It will return exit status 1 if you call it on the inconsistent image. You can remove these inconsistent images using simple loop such as:
for i in *.png; do
    identify "$i" || rm -f "$i"
done
Then I would try to download again the files that are broken.
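Putting the two steps together, a rough sketch for the question's image{1..3}.png files could look like this (the URL and names are taken from the question; identify's output is discarded since only its exit status matters here):

#!/bin/bash
# Re-download any image that ImageMagick's identify flags as broken.
for n in {1..3}; do
    f="image${n}.png"
    if ! identify "$f" >/dev/null 2>&1; then
        rm -f "$f"                       # drop the corrupt or partial file
        wget "myurl.com/image${n}.png"   # fetch a fresh copy
    fi
done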

How to download multiple numbered images from a website in an easy manner?

I'd like to download multiple numbered images from a website.
The images are structured like this:
http://website.com/images/foo1bar.jpg
http://website.com/images/foo2bar.jpg
http://website.com/images/foo3bar.jpg
... And I'd like to download all of the images within a specific interval.
Are there simple browser addons that could do this, or should I use "wget" or the like?
Thank you for your time.
Crudely, on Unix-like systems:
#!/bin/bash
# note: the {1..3} brace expansion needs bash; plain /bin/sh (dash) won't expand it
for i in {1..3}
do
    wget http://website.com/images/foo"$i"bar.jpg
done
Try googling "bash for loop".
Edit: LOL! Indeed, in my haste I omitted the name of the very program that downloads the image files. Also, this goes into a text editor, then you save it with an arbitrary file name, make it executable with the command
chmod u+x the_file_name
and finally you run it with
./the_file_name
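If you are running bash interactively anyway, the same range can also be written directly on the command line with brace expansion (a sketch using the example URLs from the question; adjust the range to the interval you need):

# bash expands foo{1..3}bar.jpg into three URLs before wget runs,
# so wget fetches foo1bar.jpg, foo2bar.jpg and foo3bar.jpg in one call
wget http://website.com/images/foo{1..3}bar.jpg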

See what process is accessing a file in Mac OS X

Note: This question is NOT "show me which files are in use". The file is not currently in use. The file will be in use at some unknown point in the future. At that point, I want to know what process accessed the file.
I would like to be able to track a file and see which process is touching that file. Is that possible? I know that I can see the list of open processes in Activity Monitor, but I think it's happening too quickly for me to see it. The reason for this is that I'm using a framework, and I think the system version of the framework is being used instead of the debug version, so I'd like to see which process is touching it.
That's simple: sudo fs_usage | grep [path_to_file]
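If the raw stream is too noisy, fs_usage can be narrowed down a bit. A sketch, with /path/to/framework standing in for the file you care about:

# -w widens the output, -f filesys restricts fs_usage to filesystem events;
# grep then keeps only the lines mentioning the watched path
sudo fs_usage -w -f filesys | grep /path/to/framework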
lsof will list open files, but it can be a bit awkward for momentary touches (e.g., if the file isn't open at the moment lsof runs, it doesn't show up).
I think your best bet would be fernLightning's fseventer.app. It's "nagware", and allows you to watch (graphically) the fsevents API in real-time.
But I spent 2 minutes Googling and found your answer here.
$ lsof | grep [whatever]
Where [whatever] is replaced with the filename you're looking for.
With this, you can see which program is desperately holding onto your about-to-be-trashed file. Once you exit that program, your trash will empty.
The faster way is:
$ lsof -r [path_to_file]
This solution doesn't require the root password and gives you back the following, clear, result:
COMMAND     PID USER  FD  TYPE DEVICE SIZE/OFF     NODE NAME
Finder      497 JR7   21r REG  1,2      246223 33241712 image.jpg
QuickLook  1007 JR7   txt REG  1,2      246223 33241712 image.jpg
The -r argument keeps the command alive and should log any new file touched by the process you want to track.
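For a tighter polling interval, -r also accepts the number of seconds to wait between listings. A sketch (the path is a placeholder):

# re-list every 2 seconds which processes currently have the file open
lsof -r 2 /path/to/image.jpg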
Another option is Sloth. It's a free, open-source GUI for lsof, which others have mentioned.

How to resume an ftp download at any point? (shell script, wget option)?

I want to download a huge file from an ftp server in chunks of 50-100MB each. At each point, I want to be able to set the "starting" point and the length of the chunk I want. I won't have the "previous" chunks saved locally (i.e. I can't ask the program to "resume" the download).
What is the best way of going about that? I use wget mostly, but would something else be better?
I'm really interested in a pre-built/built-in function rather than using a library for this purpose... Since wget/ftp (also, I think) allow resumption of downloads, I don't see why that would be a problem... (I can't figure it out from all the options, though!)
I don't want to keep the entire huge file at my end, just process it in chunks... FYI all: I'm having a look at "continue FTP download after reconnect", which seems interesting.
Use wget with the -c option.
Extracted from the man pages:
-c / --continue
Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program. For instance:
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
If there is a file named ls-lR.Z in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the server to continue the retrieval from an offset equal to the length of the local file.
For those who'd like to use command-line curl, here goes:
curl -u user:passwd -C - -o <partial_downloaded_file> ftp://<ftp_path>
(leave out -u user:passwd for anonymous access)
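Since the question asks for an explicit starting point and chunk length, it may also be worth noting that command-line curl can request a byte range directly with -r/--range. A sketch with placeholder host, path and offsets (here a 50 MiB chunk starting at the 100 MiB mark):

# bytes 104857600-157286399 = a 50 MiB chunk starting at offset 100 MiB
curl -u user:passwd -r 104857600-157286399 \
  -o chunk_at_100MiB.bin "ftp://ftp.example.com/path/huge_file.bin"

Each chunk goes into its own output file, so none of the previously downloaded chunks need to be kept locally.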
I'd recommend interfacing with libcurl from the language of your choice.
