I'm attempting to scale images that can have any given name. I've found my script is failing on files that have numbers in the name: "0% financing", "24 Hour", etc. The other files work fine, so it's not the script itself. I get:
[image2 @ 0x7fbce2008000] Could find no file with path '/path/to/0% image.jpeg' and index in the range 0-4
How can I tell ffmpeg that this isn't a search pattern or a sequence of numbered files? There's only one JPEG in each location, and I don't have control over the file names, so I can't change them.
-update-
I've figured out the command
ffmpeg -pattern_type none -i /path/to/0%\ image/0%\ image.jpeg -vf scale=320:-1 /path/to/0%\ image/0%\ image.out.jpeg
gets me past the initial problem, but the output still fails because I can't get the final argument escaped properly. If I am in the directory (so no path) and change the output to just out.jpeg, it works, so I'm confident the first error is corrected.
Now I need to figure out how to use spaces in the path in the output argument. I've tried surrounding it in quotes:
"0% image.out.jpeg"
regular escapes:
0%\ image.out.jpeg
and surrounding it in quotes and using escapes at the same time:
"0%\ image.out.jpeg"
I would be so glad to have some help with this.
I have video rushes in this form: black > sequence > black > sequence...
I need the start/end timecodes of each sequence (the non-black parts) to create segments in another piece of software, so I'm wondering whether FFmpeg's blackdetect filter can be used to output only the sequence data to a CSV file. Getting the black ranges with the simple command line is no problem, but my other attempts so far have been unsuccessful.
The goal is to automate this for multiple files by importing the CSV into an RPA tool.
Thank you very much.
You cannot do that with ffmpeg by itself, but with a bit of help from a script (e.g., Python) you can decode the frame metadata. The following filtergraph prints the blackdetect metadata to stdout:
ffmpeg -i input -vf blackdetect,metadata=print:file=- -f null /dev/null
Your script should capture the piped output, parse it to retrieve the pts and the black_start / black_end values, then reformat them and write them out as a CSV file.
If your other program is flexible, you could alternatively write the metadata to a named file (file=xxx) and do the parsing there.
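A minimal shell sketch of that parsing step, assuming the metadata filter prints lines of the form lavfi.black_start=... and lavfi.black_end=... (check your own output, the exact keys may differ between versions), that each file starts with black as you describe, and using an arbitrary blackdetect threshold of d=0.5:

#!/bin/bash
# Rough sketch: turn blackdetect output into "sequence_start,sequence_end" CSV rows.
in="$1"

ffmpeg -hide_banner -i "$in" \
    -vf "blackdetect=d=0.5,metadata=print:file=-" \
    -f null /dev/null 2>/dev/null |
awk -F= '
    # A non-black sequence runs from the end of one black region
    # to the start of the next one.
    /lavfi\.black_start/ { if (seq_start != "") print seq_start "," $2 }
    /lavfi\.black_end/   { seq_start = $2 }
    # If a file ends on a sequence rather than on black, the last segment gets
    # no end time here; its end is the file duration (e.g. from ffprobe).
' > "${in%.*}_segments.csv"

Run it once per rush (the script name and the output naming are just placeholders) and feed the resulting CSVs to your RPA.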
I have this Parisienne-type font file and I want to write on a GIF using that font, but the output font is never Parisienne; it always comes out as plain Arial.
The command:
ffmpeg -i C:\Users\1997\www\post2\css\back2.gif -vf "drawtext=fontfile='C\:\\Windows\\Fonts\\2.ttf':text='Darbuka 70':fontcolor=#5c391685:fontsize=160::x=(w-text_w)/2:y=20" C:\Users\1997\www\post2\css\output.mp4
This is how the font file looks:
The output:
It's likely due to not having enough escape characters (\). You need one for FFmpeg and to double them for the shell. So, try
fontfile='C\\\\:/Windows/Fonts/2.ttf'
Four \'s worked when I tested in Python, but if that doesn't work for you, add or subtract a pair at a time to see which count works.
Lastly, if you want to check whether the fontfile option is set properly, add the -loglevel debug option and look in the log for a line like:
[Parsed_drawtext_2 @ 0000015e54962a80] Setting 'fontfile' to value 'C:/Windows/Fonts/2.ttf'
If it's not properly escaped, you'd see a log like:
[Parsed_drawtext_2 @ 000001c010782440] Setting 'fontfile' to value 'C'
[Parsed_drawtext_2 @ 000001c010782440] Setting 'text' to value '/Windows/Fonts/2.ttf'
P.S. I cheated and used / as the directory separator. If you want to use \, the same rule applies: you need several \'s for each \. See the documentation.
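Putting it together, a sketch of the command from the question with that escaped fontfile value dropped in (the stray double colon before x= has been removed; as noted above, you may need to add or remove a pair of backslashes depending on how your shell mangles them):

ffmpeg -i C:\Users\1997\www\post2\css\back2.gif -vf "drawtext=fontfile='C\\\\:/Windows/Fonts/2.ttf':text='Darbuka 70':fontcolor=#5c391685:fontsize=160:x=(w-text_w)/2:y=20" C:\Users\1997\www\post2\css\output.mp4

If the debug log then shows Setting 'fontfile' to value 'C:/Windows/Fonts/2.ttf', the path made it through intact.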
I have about 600 books in PDF format where the filename is in the format:
AuthorForename AuthorSurname - Title (Date).pdf
For example:
Foo Z. Bar - Writing Scripts for Idiots (2017)
Bar Foo - Fun with PDFs (2016)
The metadata is unfortunately missing for pretty much all of them so when I import them into Calibre the Author field is blank.
I'm trying to write a script that takes everything that appears before the '-', removes the trailing space, and then adds it as the author in the PDF metadata using exiftool.
So far I have the following:
for i in "*.pdf";
do exiftool -author=$(echo $i | sed 's/-.*//' | sed 's/[ \t]*$//') "$i";
done
When trying to run it, however, the following is returned:
Error: File not found - Z.
Error: File not found - Bar
Error: File not found - *.pdf
0 image files updated
3 files weren't updated due to errors
What about the -author= phrase is breaking here? Please could someone enlighten me?
You don't need to script this. In fact, doing so will be much slower than letting exiftool do it by itself, as you would require exiftool to start up once for every file.
Try this
exiftool -ext pdf '-author<${filename;s/\s+-.*//}' /path/to/target/directory
Breakdown:
-ext pdf process only PDF files
-author the tag to copy to
< The copy from another tag option. In this case, the filename will be treated as a pseudo-tag
${filename;s/\s+-.*//} Copying from the filename, but first performing a regex on it. In this case, looking for 1 or more spaces, a dash, and the rest of the name and removing it.
Add -r if you want to recurse into subdirectories. Add -overwrite_original to avoid making backup files with _original added to the filename.
The error with your first command was that the value you wanted to assign had spaces in it and needed to be enclosed in quotes (and the quoted "*.pdf" in the for loop stopped the glob from expanding, which is where the "File not found - *.pdf" error comes from).
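For completeness, a sketch of the original loop with both quoting issues fixed (the single exiftool invocation above is still the faster approach, since it starts exiftool only once):

for i in *.pdf; do                              # unquoted glob so it actually expands
    author=$(echo "$i" | sed 's/[ \t]*-.*//')   # everything before the " - ", trailing space removed
    exiftool -author="$author" "$i"             # quote the value: it contains spaces
done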
I'm basically downloading some images from a website using wget and then appending them into a PDF file using the command-line program "convert". But this last step doesn't seem to work.
I'm getting all the .jpg images and storing them in one folder with no problems, but when I try to merge them into the PDF file, it always ends up containing only the last appended image. I've read about convert's -append argument, but it still won't work.
This is what my code looks like:
for file in *.jpg
do
convert "${file}" -append "myfile.pdf"
done
But as logical as it seems, myfile.pdf always ends up containing only the last appended jpg image.
I know that using convert like:
convert img1.jpg img2.jpg img3.jpg myfile.pdf
Would do the trick. But as I don't know how many images will I have in the download directory, I cannot hardcode the arguments, so I guess a loop for each image in that directory as I'm trying would be the best solution.
Does anybody know how to achieve my goal? Any help will be much appreciated.
Thanks in advance.
bash automatically expands wildcard arguments (unless they are quoted or escaped), so even if convert does not support wildcard expansion itself, bash does it for you. So you could just do
convert *.jpg myfile.pdf
Note that if there are too many files, this can fail with "argument list too long", but that should be OK for several hundred files.
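If you do hit that limit, one workaround (a sketch, assuming your ImageMagick build supports the @filename syntax for reading image names from a text file) is to pass the names via a list file:

# printf is a shell builtin, so it is not subject to the exec argument-length limit.
printf '%s\n' *.jpg > filelist.txt
# convert reads the image names from the list file instead of the command line.
convert @filelist.txt myfile.pdf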
If your file names follow a pattern like img1.jpg img2.jpg ..., then you may also use a bash brace range:
convert img{1..5}.jpg myfile.pdf
This will work for img1.jpg img2.jpg img3.jpg img4.jpg img5.jpg. You can change the range as per your requirement.
For converting all the jpg files, the answer is already present in the other answer by @Jean-François Fabre.
I'd like to be able to add/edit video metadata titles to multiple files at once or with a single command, but I don't know how to tell ffmpeg to do this.
I read a similar post on the Ubuntu Forums, but I have never used string manipulation in Linux before, so the commands I'm seeing in the post are way out of my comprehension at the moment, and much of the discussion goes over my head.
I've got all of my video files in a filename format that includes the show name, the episode number, and episode title. For example:
show_name - episode_number - episode_title.extension
Bleach - 001 - A Shinigami Is Born!.avi
Is there a simple way to read the title and episode number from the filename and put it into a metadata tag without having to go through each and every file manually?
EDIT 1: So I found out that I can iterate through files in a directory and echo the filename, and I was told by a friend to try bash to parse the strings and return values from that to use in the ffmpeg command line. The problem is, I have absolutely no idea how to do this. The string manipulation in bash is very confusing at first glance, and I can't seem to get it to output what I want into my variables. My test bash:
for file in "Bleach - 206 - The Past Chapter Begins! The Truth from 110 Years Ago.mkv"; do extension=${file##*.} showName=${file%% *} episode=${file:9:3}; echo Extension: $extension Show: $showName Episode: $episode; done
That outputs
Extension: mkv Show: Bleach Episode: 206
Those are all the variables I'm going to need; I just don't know how to feed them into ffmpeg now.
EDIT 2: I believe I was able, through much trial and error, to find a bash command that would do exactly what I wanted.
for file in *; do newname=${file:0:-4}_2 ext=${file##*.} filename=${file} showname=${file%% *} episode=${file:9:3} nameext=${file##*- } title=${nameext%.*}; ffmpeg -i "$filename" -metadata title="$title" -metadata track=$episode -metadata album=$showname -c copy "$newname.$ext"; mv -f "$newname.$ext" "$filename"; done
This lets me parse the information from the filename, copy it into some variables, and then run ffmpeg using those variables. It outputs to a second file, then moves that file to the original location, overwriting the original. You could remove that last step if you're not sure how it's going to parse your files, but I'm glad I was able to get a solution that works for me.
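For readability, here is the same loop spread over several lines with comments; it is functionally equivalent to the one-liner above, and the substring offsets assume the exact "Show - NNN - Title.ext" layout:

for file in *; do
    ext=${file##*.}          # extension, e.g. mkv
    showname=${file%% *}     # everything before the first space, e.g. Bleach
    episode=${file:9:3}      # three characters starting at offset 9, e.g. 206
    nameext=${file##*- }     # everything after the last "- ", e.g. Title.mkv
    title=${nameext%.*}      # drop the extension, e.g. Title
    newname=${file:0:-4}_2   # temporary output name (negative length needs bash 4.2+)

    # Copy the streams untouched, only add the metadata, then replace the original file.
    ffmpeg -i "$file" \
        -metadata title="$title" \
        -metadata track="$episode" \
        -metadata album="$showname" \
        -c copy "$newname.$ext"
    mv -f "$newname.$ext" "$file"
done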