I'd like to be able to add/edit video metadata titles to multiple files at once or with a single command, but I don't know how to tell ffmpeg to do this.
I read a similar post on the Ubuntu Forums, but I have never used string manipulation in Linux before, so the commands in that post are well beyond my comprehension at the moment, and much of the discussion goes over my head.
I've got all of my video files in a filename format that includes the show name, the episode number, and episode title. For example:
show_name - episode_number - episode_title.extension
Bleach - 001 - A Shinigami Is Born!.avi
Is there a simple way to read the title and episode number from the filename and put it into a metadata tag without having to go through each and every file manually?
EDIT 1: I found out that I can iterate through the files in a directory and echo each filename, and a friend suggested using bash to parse the strings and pass the resulting values to the ffmpeg command line. The problem is that I have absolutely no idea how to do this. Bash string manipulation is very confusing on first look, and I can't seem to get it to output what I want into my variables. My test bash:
for file in "Bleach - 206 - The Past Chapter Begins! The Truth from 110 Years Ago.mkv"; do
    extension=${file##*.}
    showName=${file%% *}
    episode=${file:9:3}
    echo Extension: $extension Show: $showName Episode: $episode
done
That outputs
Extension: mkv Show: Bleach Episode: 206
Which are all the variables I'm going to need, I just don't know how to move those to be run in ffmpeg now.
EDIT 2: I believe I was able, through much trial and error, to find a bash command that would do exactly what I wanted.
for file in *; do
    newname=${file:0:-4}_2
    ext=${file##*.}
    showname=${file%% *}
    episode=${file:9:3}
    nameext=${file##*- }
    title=${nameext%.*}
    ffmpeg -i "$file" -metadata title="$title" -metadata track="$episode" -metadata album="$showname" -c copy "$newname.$ext"
    mv -f "$newname.$ext" "$file"
done
This parses the information from the filename into variables, then runs ffmpeg with those variables. It outputs to a second file, then moves that file over the original, overwriting it. You could drop the mv step if you're not sure how it's going to parse your files, but I'm glad I was able to get a solution that works for me.
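If you want to check the parsing before anything gets overwritten, a dry-run variant of the same loop (assuming the same `Show - NNN - Title.ext` layout, with a literal sample filename standing in for the glob) just echoes each command instead of running it:

```shell
# Dry run: echo the ffmpeg command instead of executing it, so the
# parsed fields can be inspected first. Note ${file:9:3} assumes the
# episode number starts at character 9, i.e. a six-letter show name.
for file in "Bleach - 206 - The Past Chapter Begins! The Truth from 110 Years Ago.mkv"; do
    ext=${file##*.}         # text after the last dot
    showname=${file%% *}    # text before the first space
    episode=${file:9:3}     # three characters starting at offset 9
    nameext=${file##*- }    # text after the last "- "
    title=${nameext%.*}     # strip the extension
    echo ffmpeg -i "$file" -metadata title="$title" \
        -metadata track="$episode" -metadata album="$showname" -c copy "out.$ext"
done
```

Once the echoed commands look right, remove the echo (and restore the glob) to actually tag the files.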
Related
I want to download the data files from the URLs using a loop from 1945 to 2020; only one number changes in the URL. The URLs are given below:
https://www.metoc.navy.mil/jtwc/products/best-tracks/1945/1945s-bio/bio1945.zip
https://www.metoc.navy.mil/jtwc/products/best-tracks/1984/1984s-bio/bio1984.zip
https://www.metoc.navy.mil/jtwc/products/best-tracks/2020/2020s-bio/bio2020.zip
I tried the following code, but it throws an error
for i in {1945..2020}
do
wget "https://www.metoc.navy.mil/jtwc/products/best-tracks/$i/$is-bio/bio$i.zip"
done
I changed your code slightly:
for i in {1945..1947}
do
echo "https://www.metoc.navy.mil/jtwc/products/best-tracks/$i/$is-bio/bio$i.zip"
done
when run it does output
https://www.metoc.navy.mil/jtwc/products/best-tracks/1945/-bio/bio1945.zip
https://www.metoc.navy.mil/jtwc/products/best-tracks/1946/-bio/bio1946.zip
https://www.metoc.navy.mil/jtwc/products/best-tracks/1947/-bio/bio1947.zip
Notice that the first one is not https://www.metoc.navy.mil/jtwc/products/best-tracks/1945/1945s-bio/bio1945.zip as you might expect. The 2nd $i did not work as intended: because it is followed by s, it was understood as the variable $is, which is not defined. Enclose variable names in { } to avoid confusion. This code
for i in {1945..1947}
do
echo "https://www.metoc.navy.mil/jtwc/products/best-tracks/${i}/${i}s-bio/bio${i}.zip"
done
when run does output
https://www.metoc.navy.mil/jtwc/products/best-tracks/1945/1945s-bio/bio1945.zip
https://www.metoc.navy.mil/jtwc/products/best-tracks/1946/1946s-bio/bio1946.zip
https://www.metoc.navy.mil/jtwc/products/best-tracks/1947/1947s-bio/bio1947.zip
which is compliant with the example you gave. Now you might either replace echo with wget, or save the output of the echo version to a file named, say, urls.txt and then harness the -i option of wget as follows:
wget -i urls.txt
Note: for brevity's sake I used 1945..1947 in place of 1945..2020.
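Spelling out the second route, the echo loop's output can be redirected into a file, e.g.:

```shell
# Write one URL per line to urls.txt (1945..1947 for brevity).
for i in {1945..1947}; do
    echo "https://www.metoc.navy.mil/jtwc/products/best-tracks/${i}/${i}s-bio/bio${i}.zip"
done > urls.txt
```

Then wget -i urls.txt downloads every URL in the list with a single wget invocation.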
It worked directly, thanks @Daweo
for i in {1945..2020}
do
wget "https://www.metoc.navy.mil/jtwc/products/best-tracks/${i}/${i}s-bio/bio${i}.zip"
done
I have a file text.txt which contains very basic latex/markdown. For example, it might be the following.
Here is some basic maths: $f(x) = ax + b$ defines a straight line, often called a "linear" function---but it's not _actually_ a linear function, eg $f(0) \ne 0$.
I would like to convert this into html using WebTeX. However, I don't want smart quotes (a " should be output as plain straight marks, not curved on either end) or smart dashes (--- should come through as literally three dashes, not an em-dash).
It seems that the smart option is good for this: pandoc manual, github 1, github 2. However, I can't quite work out the correct syntax. I have tried, for example, the following.
pandoc text.txt -f markdown-smart -t markdown-smart -s --webtex -o tex.html
Unfortunately this doesn't work.
I solved this while writing the question, so I'll post the answer below! (Spoiler alert: simply remove -t markdown-smart.)
Simply remove -t markdown-smart.
pandoc text.txt -f markdown-smart -s --webtex -o tex.html
I believe that this -t is saying "output to markdown, without smart punctuation". We are not trying to output markdown, but rather html. If the version produced with -t is viewed, one sees that it includes the code for embedding the various images; if this is pasted into a markdown editor, it should show up.
To get html, simply remove this.
I'd like to use ExifTool to batch-write metadata that have been previously saved in a text file.
Say I have a directory containing the following JPEG files:
001.jpg 002.jpg 003.jpg 004.jpg 005.jpg
I then create the file metadata.txt, which contains the file names followed by a colon, and I hand it out to a coworker, who will fill it with the needed metadata — in this case comma-separated IPTC keywords. The file would look like this after being finished:
001.jpg: Keyword, Keyword, Keyword
002.jpg: Keyword, Keyword, Keyword
003.jpg: Keyword, Keyword, Keyword
004.jpg: Keyword, Keyword, Keyword
005.jpg: Keyword, Keyword, Keyword
How would I go about feeding this file to ExifTool and making sure that the right keywords get saved to the right file? I'm also open to changing the structure of the file if that helps, for example by formatting it as CSV, JSON or YAML.
If you can change the format to a CSV file, then exiftool can directly read it with the -csv option.
You would have to reformat it in this way: the first row has to have the header "SourceFile" above the filenames and "Keywords" above the keywords. If the filenames don't include the path to the files, then the command has to be run from the same directory as the files. The whole keywords string needs to be enclosed in quotes so it isn't read as separate columns. The result would look like this:
SourceFile,Keywords
001.jpg,"KeywordA, KeywordB, KeywordC"
002.jpg,"KeywordD, KeywordE, KeywordF"
003.jpg,"KeywordG, KeywordH, KeywordI"
004.jpg,"KeywordJ, KeywordK, KeywordL"
005.jpg,"KeywordM, KeywordN, KeywordO"
At that point, your command would be
exiftool -csv=/path/to/file.csv -sep ", " /path/to/files
The -sep option is needed to make sure the keywords are treated as separate keywords rather than a single, long keyword.
This has an advantage over a script looping over the file contents and running exiftool once for each line. Exiftool's biggest performance hit is its startup time, and running it in a loop will be very slow, especially on a large number of files (see Common Mistake #3).
See ExifTool FAQ #26 for more details on reading from a csv file.
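If the colon-separated file from the question already exists, a small sed sketch (assuming the filenames themselves contain no colons) can convert it into that CSV layout; the sample input here mirrors the question's format:

```shell
# Sample input in the question's "name: keywords" format.
cat > metadata.txt <<'EOF'
001.jpg: KeywordA, KeywordB, KeywordC
002.jpg: KeywordD, KeywordE, KeywordF
EOF

# Convert to the CSV layout exiftool's -csv option expects:
# the keyword list is wrapped in quotes so its commas don't
# get read as extra columns.
printf 'SourceFile,Keywords\n' > metadata.csv
sed 's/^\([^:]*\): \(.*\)$/\1,"\2"/' metadata.txt >> metadata.csv
```

After that, the exiftool -csv command above can be used unchanged.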
I believe the answer by @StarGeek is superior to mine, but I will leave mine for completeness and reference of a more basic, Luddite approach :-)
I think you want this:
#!/bin/bash
while IFS=': ' read -r file keywords ; do
exiftool -sep ", " -iptc:Keywords="$keywords" "$file"
done < list.txt
Here is the list.txt:
001.jpg: KeywordA, KeywordB, KeywordC
002.jpg: KeywordD, KeywordE, KeywordF
003.jpg: KeywordG, KeywordH, KeywordI
And here is a result:
exiftool -b -keywords 002.jpg
KeywordD
KeywordE
KeywordF
Many thanks to StarGeek for his corrections and explanations.
I have about 600 books in PDF format where the filename is in the format:
AuthorForename AuthorSurname - Title (Date).pdf
For example:
Foo Z. Bar - Writing Scripts for Idiots (2017)
Bar Foo - Fun with PDFs (2016)
The metadata is unfortunately missing for pretty much all of them so when I import them into Calibre the Author field is blank.
I'm trying to write a script that takes everything before the '-', removes the trailing space, and then adds it as the author in the PDF metadata using exiftool.
So far I have the following:
for i in "*.pdf";
do exiftool -author=$(echo $i | sed 's/-.*//' | sed 's/[ \t]*$//') "$i";
done
When trying to run it, however, the following is returned:
Error: File not found - Z.
Error: File not found - Bar
Error: File not found - *.pdf
0 image files updated
3 files weren't updated due to errors
What about the -author= phrase is breaking here? Please could someone enlighten me?
You don't need to script this. In fact, doing so will be much slower than letting exiftool do it by itself, as it would require exiftool to start up once for every file.
Try this
exiftool -ext pdf '-author<${filename;s/\s+-.*//}' /path/to/target/directory
Breakdown:
-ext pdf process only PDF files
-author the tag to copy to
< The copy from another tag option. In this case, the filename will be treated as a pseudo-tag
${filename;s/\s+-.*//} Copying from the filename, but first performing a regex on it. In this case, looking for 1 or more spaces, a dash, and the rest of the name and removing it.
Add -r if you want to recurse into subdirectories. Add -overwrite_original to avoid making backup files with _original appended to the filename.
There were two errors in your first command: the glob was quoted ("*.pdf"), so it never expanded to the actual filenames, and the value you wanted to assign had spaces in it and needed to be enclosed in quotes; without them, exiftool treated Z. and Bar as filenames.
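For reference, a dry-run sketch of the original loop with both quoting fixes applied, using echo in place of exiftool so the parsed author can be inspected first (the sample filename is from the question; in real use the glob `*.pdf` would be unquoted in the for line):

```shell
# Dry run of the corrected loop. A literal sample filename stands in
# for the unquoted glob so the parsing itself can be demonstrated.
for i in "Foo Z. Bar - Writing Scripts for Idiots (2017).pdf"; do
    author=$(printf '%s' "$i" | sed 's/ *-.*//')   # strip " - Title (Date).pdf"
    echo exiftool -author="$author" "$i"           # echo = dry run; drop it to apply
done
```

The quotes around "$author" and "$i" are what keep the multi-word values from being split into separate arguments; that said, the single exiftool invocation above remains the faster approach.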
I'm attempting to scale image(s) that have any given name. I've found my script is failing on files that have numbers in the name. "0% financing", "24 Hour", etc. The other files are working fine, so it's not the script itself. I get:
[image2 @ 0x7fbce2008000] Could find no file with path '/path/to/0% image.jpeg' and index in the range 0-4
How can I tell ffmpeg that this isn't a search pattern, or sequential numbered files ? There's only 1 jpeg in each location, and I do not have control of the file names to change them.
-update-
I've figured out the command
ffmpeg -pattern_type none -i /path/to/0%\ image/0%\ image.jpeg -vf scale=320:-1 /path/to/0%\ image/0%\ image.out.jpeg
gets me past the initial problem, but the output won't work because I can't get it to escape the final argument. If I am in the directory (so no path) and change the output to just out.jpeg, it works, so I'm confident the first error is corrected.
Now I need to figure out how to handle spaces in the path of the output argument. I've tried surrounding it in quotes:
"0% image.out.jpeg"
regular escapes:
0%\ image.out.jpeg
and surrounding it in quotes and using escapes at the same time:
"0%\ image.out.jpeg"