aria2 can be given a magnet URI and will save the corresponding torrent file. The file is saved under the hex-encoded info hash with the suffix .torrent.
Magnet URIs support a dn= parameter, the Display Name. Is it possible to use this name when saving the torrent, so that
aria2c -d . --bt-metadata-only=true --bt-save-metadata=true "magnet:?xt=urn:btih:cf7da7ab4d4e6125567bd979994f13bb1f23dddd&dn=ubuntu-18.04.2-desktop-amd64.iso"
outputs ubuntu-18.04.2-desktop-amd64.iso.torrent instead of cf7da7ab4d4e6125567bd979994f13bb1f23dddd.torrent?
I couldn't find a direct option, but there is a workaround.
aria2c -S, --show-files[=true|false]
Print file listing of .torrent, .meta4 and .metalink file and exit. More detailed
information will be listed in case of torrent file.
Using this plus some grep and cut, you could do something like this:
mv hash.torrent "`aria2c -S hash.torrent | grep Name | cut -c7-`.torrent"
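If you want to automate this end to end, here is a rough sketch that parses the dn= value straight out of the magnet URI and renames the saved file. One assumption in this sketch: the display name is not URL-encoded (if it is, it would need decoding first).
magnet='magnet:?xt=urn:btih:cf7da7ab4d4e6125567bd979994f13bb1f23dddd&dn=ubuntu-18.04.2-desktop-amd64.iso'
# the info hash is the name aria2 saves the metadata under
hash=$(printf '%s' "$magnet" | sed -n 's/.*btih:\([^&]*\).*/\1/p')
# the display name we want to rename to
name=$(printf '%s' "$magnet" | sed -n 's/.*[?&]dn=\([^&]*\).*/\1/p')
aria2c -d . --bt-metadata-only=true --bt-save-metadata=true "$magnet"
mv "$hash.torrent" "$name.torrent"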
I'm trying to use the update_info operation to add some bookmarks to an existing PDF's metadata, using pdftk and PowerShell.
I first dump the metadata into a file as follows:
pdftk .\test.pdf dump_data > test.info
Then I edit the test.info file, adding the bookmarks; I believe I am using the right syntax. I save the test.info file and attempt to write the metadata to a new PDF file using update_info:
pdftk test.pdf update_info test.info output out.pdf
Unfortunately, I get a warning as follows:
pdftk Warning: unexpected case 1 in LoadDataFile(); continuing
out.pdf is generated, but contains no bookmarks. Just to be sure it is not a syntax problem, I also ran it without editing the metadata file, simply writing back the same metadata. I still got the same warning.
Why is this warning occurring, and why are no bookmarks written to my resulting PDF?
Using redirection in that fashion,
pdftk .\test.pdf dump_data > test.info
will cause this known problem by building the wrong file structure (under PowerShell, > typically writes UTF-16 output, which pdftk cannot read back), so change to
pdftk .\test.pdf dump_data output test.info
In addition, check that your alterations are correctly balanced (and contain no unusual characters), then save the edited output file in the same encoding.
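For reference, a bookmark record in the dump_data format is a four-line block like the following (title and page number here are just placeholders); each record must be complete for update_info to accept it:
BookmarkBegin
BookmarkTitle: Chapter 1
BookmarkLevel: 1
BookmarkPageNumber: 5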
Note: you may also need to consider
Use dump_data_utf8 and update_info_utf8 in order to properly display characters in scripts other than Latin (e. g. oriental CJK)
I used pdftk --help >pdftk-help.txt to find the answer.
With credit to the previous answer, the following creates a text file of the information parameters:
pdftk aaa.pdf dump_data output info.txt
Edit the info.txt file as needed.
The pdftk update_info option creates a new PDF file, leaving the original PDF untouched. Use:
pdftk aaa.pdf update_info info.txt output bbb.pdf
I have tons of photos, and I assign a keyword called "background" to the photos that I want as my background.
My photos are located in a folder called "Photos"; that folder has lots of subfolders.
Is there a terminal command that finds all photos in the "Photos" folder that have the keyword "background" and copies them to, let's say, "Folder B"?
I do have Exiftool, by the way; that might help.
Ralph
Edit:
'Achtergrond' means background.
I now tried:
exiftool -o ~/test/MapA -if '$Subject=Achtergrond' ~/test/MapB
Also tried with this:
-if '$Subject eq "Achtergrond"'
exiftool -G1 -a -s -api MDItemTags=1 File.jpg | grep Achtergrond
[MacOS] MDItemKeywords : Achtergrond
[XMP-dc] Subject : Achtergrond
exiftool File.JPG | grep Achtergrond
Subject : Achtergrond
and I tried:
exiftool -o ~/test/MapA -if '$XMP-dc:Subject eq "Achtergrond"' ~/test/MapB
1 directories scanned
0 image files read
What am I missing here?
The basic command to do this with exiftool would be
exiftool -o '/path/to/Folder B/' -if '$Keywords=~/background/i' /path/to/Photos/
You do need to check where your keywords are actually stored. Depending upon what program you used to tag them, the background tag might be stored in XMP:Subject, IPTC:Keywords, or MDItemKeywords. Maybe even MDItemUserTags; I'm not overly familiar with how the Mac system tags work.
I'd suggest running
exiftool -G1 -a -s -api MDItemTags=1 FILE.JPG
on a file that you know contains the "background" tag, and look for the tag that contains "background". If it's something other than Keywords, replace Keywords in the above command with that tag name.
Breakdown of the above command:
-o '/path/to/Folder B/': This tells exiftool to copy files to the path '/path/to/Folder B/'. The trailing slash is needed if the output directory doesn't already exist, as otherwise exiftool will just create a file named "Folder B". Quotes are needed around the path if there are spaces in it, or the spaces need to be escaped with a backslash.
-if '$Keywords=~/background/i': This performs a case insensitive RegEx check on the Keywords tag to see if it contains "background". If it does, then the command will be executed on that file, otherwise that file will be skipped.
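Since your own -G1 output shows the tag stored in XMP-dc:Subject, and your Photos folder has subfolders, an adapted (untested) variant might look like this; -r makes exiftool recurse into subdirectories, the braces keep the hyphenated group name unambiguous, and the regex match is more forgiving than an exact eq comparison:
exiftool -r -o '/path/to/Folder B/' -if '${XMP-dc:Subject}=~/Achtergrond/i' /path/to/Photos/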
Text file (filename: listing.txt) with names of files as its contents:
ace.pdf
123.pdf
hello.pdf
I wanted to download these files from the URL http://www.myurl.com/.
In bash, I tried to merge these together and download the files using wget, e.g.:
http://www.myurl.com/ace.pdf
http://www.myurl.com/123.pdf
http://www.myurl.com/hello.pdf
I tried variations of the following, but without success:
for i in $(cat listing.txt); do wget http://www.myurl.com/$i; done
No need to use cat and a loop. You can use xargs for this:
xargs -I {} wget http://www.myurl.com/{} < listing.txt
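One common reason the loop variant fails silently is a listing.txt with Windows line endings: each name then carries a trailing carriage return, and wget requests URLs ending in %0D. If that might be the case here (a guess, since the original error wasn't shown), strip them first:
tr -d '\r' < listing.txt | xargs -I {} wget http://www.myurl.com/{}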
Actually, wget has options which can avoid loops & external programs completely.
-i file
--input-file=file
Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.)
If this function is used, no URLs need be present on the command line. If there are URLs both on the command line and in an input file, those on the command lines will be the first ones to be retrieved. If --force-html
is not specified, then file should consist of a series of URLs, one per line.
However, if you specify --force-html, the document will be regarded as html. In that case you may have problems with relative links, which you can solve either by adding "<base href="url">" to the documents or by
specifying --base=url on the command line.
If the file is an external one, the document will be automatically treated as html if the Content-Type matches text/html. Furthermore, the file's location will be implicitly used as base href if none was specified.
-B URL
--base=URL
Resolves relative links using URL as the point of reference, when reading links from an HTML file specified via the -i/--input-file option (together with --force-html, or when the input file was fetched remotely from a
server describing it as HTML). This is equivalent to the presence of a "BASE" tag in the HTML input file, with URL as the value for the "href" attribute.
For instance, if you specify http://foo/bar/a.html for URL, and Wget reads ../baz/b.html from the input file, it would be resolved to http://foo/baz/b.html.
Thus,
$ cat listing.txt
ace.pdf
123.pdf
hello.pdf
$ wget -B http://www.myurl.com/ -i listing.txt
This will download all three files.
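Alternatively, you can build the full URLs yourself and feed them to wget on standard input, using the -i - form described above:
sed 's|^|http://www.myurl.com/|' listing.txt | wget -i -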
How would I get the MIME type of a file handle without saving it to disk?
What I mean is a file that is not saved to disk; rather, I extracted it from an archive and plan on piping it to another script.
Say I extracted the file like this:
tar -xOzf images.tar.gz images/logo.jpg | myscript
Now inside myscript I would like to check the MIME type of the file before further processing it. How would I go about this?
As some people think my comment above is helpful, I'm posting it as an answer.
The file command is able to determine a file's MIME type on the fly, i.e. when being piped: it can read a file from stdin (pass - as the file name) and print just the MIME type when given -b (brief) and --mime-type. Considering your example, you probably want to extract a single file from an archive and determine its MIME type:
$ tar -xOzf foo.tar.gz file_in_archive.txt | file -b --mime-type -
text/plain
So for a simple text file extracted from an archive to stdout, it could look like the example above. Hope that helps.
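One caveat: piping the data into file consumes the stream, so myscript cannot both sniff the type and still process the same bytes from one pipe. Since the archive is still on disk, one simple (if slightly wasteful) sketch is to extract twice, once to sniff and once to process:
# first pass sniffs the type; second pass feeds the real consumer
mime=$(tar -xOzf images.tar.gz images/logo.jpg | file -b --mime-type -)
if [ "$mime" = "image/jpeg" ]; then
    tar -xOzf images.tar.gz images/logo.jpg | myscript
fi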
I have a simple .csv file.
Is it possible to convert it to .xls using the command-line tool ssconvert?
I would also need to specify the name of the sheet.
ln -s input.csv MySheetName
ssconvert MySheetName output.xls
The OP asked how to convert csv to xls while controlling the sheet name in the output.
The generated .xls file will use the name of the input CSV file as the sheet name, so you can symlink the .csv to anything you want (or rename the input file) to produce the desired result.
The previous answer implies that --list-exporters leads to a solution, but it merely lists exporter names with no information about their options, and no options are documented in the man page for xls-exporters. Experimentally, none of the exporters which can create .xls accept options (they fail with "The file saver does not take options" if you use -O).
Yes, it is possible.
You must specify file names with extensions for both the input and output files.
For example:
ssconvert in.csv out.xls
Using the --list-importers and --list-exporters options, you can take a look at the available formats.
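If relying on the output extension is not enough, ssconvert can also be told which exporter to use explicitly with -T. As an illustrative example (check --list-exporters on your own system, since plugin ids can differ between versions), the Excel 97 .xls exporter is usually named as below:
ssconvert -T Gnumeric_Excel:excel_biff8 in.csv out.xls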