How to tell if ffmpeg errored or not? - ffmpeg

The Situation:
I'm using ffmpeg (via .NET) to save video files. I can capture ffmpeg's output, but I don't know how to customize it to get a more useful result.
My Problem:
My problem is that there is no clear difference between a successful and a failed operation.
Last line of a success:
video:1006kB audio:134kB subtitle:0 global headers:0kB muxing overhead 0.943510%
Last lines from failures:
c:\x\test-9-8/30/2012-9:29:56-AM.mp4: Invalid argument
rtmp://cdn.tv/cdn-live39/definst/stream01: Unknown error occurred
My Question:
Is there an option (or command-line parameter) to add some sort of return code (200: success, 500: error, etc.)?
Thanks!
PS: I saw the topic How to tell if ffmpeg errored? but there is no number before/after the last line. I think the latest version doesn't print that number anymore.

I know this is very old, but since I came across it, found no other reliable answer, and did some more testing:
The suggestion to check for a return code of 0 is generally good advice, but it does not help in all cases. The other idea, checking whether the output file exists, is also good, but again does not cover every case.
For example, when the input is an MP3 file with an embedded cover, ffmpeg does (in my tests) use that image and extracts it as an (unusable) video file.
What I do now is enable debug-level output and parse it for the number of muxed packets.
ffmpeg -i "wildlife.mp4" -c:v copy -an -sn "out.mp4" -y -loglevel debug 2> wildlife.txt
With a regex I search for this text:
Output stream .+ \(video\): [0-9][0-9]+ packets muxed \([0-9][0-9]+ bytes\)
(note the escaped parentheses around "video" so they match literally; this also assumes that every video has more than 9 packets, which could of course be refined for really short videos).
Of course, for RTMP or other setups the output may differ, but I think parsing the full output stream is the only option.
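A sketch of that parsing approach in Python, assuming the debug log has already been captured to a string (the sample line below is made up to match the pattern quoted above, not real ffmpeg output):

```python
import re

# Pattern from the answer above, with the parentheses around "video" escaped;
# [0-9][0-9]+ requires at least two digits, i.e. more than 9 muxed packets.
MUXED_RE = re.compile(
    r"Output stream .+ \(video\): [0-9][0-9]+ packets muxed \([0-9][0-9]+ bytes\)"
)

def looks_successful(log_text: str) -> bool:
    """Return True if the debug log reports a plausible number of muxed video packets."""
    return MUXED_RE.search(log_text) is not None

# Made-up sample in the expected format:
sample = "Output stream #0:0 (video): 1408 packets muxed (3541612 bytes)"
print(looks_successful(sample))   # expected: True
```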

You could just check the exit code returned by ffmpeg: it returns 0 on success; anything else means it failed.
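A minimal sketch of that check in Python (the ffmpeg arguments in the comment are placeholders; any command works the same way):

```python
import subprocess
import sys  # used only in the demo below

def ran_ok(cmd) -> bool:
    """Run a command and report success purely via its exit code (0 = success)."""
    result = subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

# In practice the command would be something like:
#   ran_ok(["ffmpeg", "-i", "in.mp4", "-c", "copy", "out.mp4", "-y"])
# Demo with the Python interpreter itself as a stand-in command:
print(ran_ok([sys.executable, "-c", "pass"]))   # expected: True
```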

You can run ffmpeg in -v error mode and have it write errors to a text file, see here: https://superuser.com/questions/100288/how-can-i-check-the-integrity-of-a-video-file-avi-mpeg-mp4. You can combine this with a real encode (instead of the null output), but then you will only be able to read the results from the text file.
Or you can have an additional script that follows up on the errors. Here is a Python example which checks file integrity; notice the if stdout clause. It basically re-checks the encoded file, in case you need to see the normal output first.
Solution 1:
import subprocess
import os
import sys

def check_video(files):
    if isinstance(files, str):
        files = [files]
    for file in files:
        print(f"Checking {file}...")
        # Build the command as a list so it runs without a shell on any platform.
        command_line = [
            "ffmpeg", "-v", "error", "-i", file,
            "-map", "0:v", "-map", "0:a?",
            "-vcodec", "copy", "-acodec", "copy",
            "-f", "null", "-",
        ]
        base_name = os.path.splitext(file)[0]
        extension = os.path.splitext(file)[1]
        process = subprocess.Popen(command_line, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        stdout, stderr = process.communicate()  # stderr is merged into stdout above
        return_code = process.returncode
        print(f"Base: {base_name}")
        print(f"Extension: {extension}")
        print(f"RC: {return_code}")
        if stdout:
            allowed_errs = ["invalid as first byte of an EBML number"]
            stdout_detect = stdout.decode().split("\n")
            for error in allowed_errs:
                if error not in stdout_detect[0] or len(stdout_detect) > 2:
                    print("Errors!")
                    print(os.path.splitext(file)[1])
                    os.rename(file, f"{file}.error")
                    with open(f"{file}.error.log", "w") as errfile:
                        if stdout:
                            errfile.write(stdout.decode())
                        if stderr:
                            errfile.write(stderr.decode())
                else:
                    print("Minor problems detected.")
        else:
            print("File OK.")

if __name__ == "__main__":
    files = sys.argv[1:]
    # files = ["a.mkv"]
    check_video(files)
Solution 2:
with subprocess.Popen(command_line,
                      stdout=subprocess.PIPE,
                      stderr=subprocess.STDOUT,
                      universal_newlines=True) as process:
    for line in process.stdout:
        print(line, end='')
    return_code = process.wait()
From here, you can do whatever you like with each line, such as checking for error keywords in it. ffmpeg has a standard set of error codes (https://ffmpeg.org/doxygen/trunk/group__lavu__error.html). With this solution, the output is displayed just as when running ffmpeg directly. Source: https://stackoverflow.com/a/4417735/1793606. Tested with Python 3.10.
Solution 3:
Also, you can set -err_detect for ffmpeg itself, which should be reflected in the return code.
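A sketch of such a strict decode pass; the -err_detect explode value (taken from the ffmpeg documentation) makes the decoder treat stream errors as hard failures, so a damaged input is more likely to yield a non-zero exit code:

```python
import subprocess

def build_strict_cmd(infile: str) -> list:
    """Build an ffmpeg command that decodes the input strictly and discards output."""
    return [
        "ffmpeg", "-v", "error",
        "-err_detect", "explode",  # abort on decoding errors instead of carrying on
        "-i", infile,
        "-f", "null", "-",
    ]

# Run it and rely on the exit code:
#   rc = subprocess.run(build_strict_cmd("in.mp4")).returncode
```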

Related

How get centroid value of an audio file using FFMPEG aspectralstats

I'm new to FFmpeg and I'm having a really hard time understanding the documentation: https://ffmpeg.org/ffmpeg-all.html#aspectralstats
I want to get the centroid value for an audio file using the command line.
ffmpeg -i file.wav -af aspectralstats=measure=centroid -f null -
I get the following errors
[Parsed_aspectralstats_0 # 000002a19b1b9380] Option 'measure' not found
[AVFilterGraph # 000002a19b1c99c0] Error initializing filter 'aspectralstats' with args 'measure=centroid'
Error reinitializing filters!
Failed to inject frame into filter network: Option not found
Error while processing the decoded data for stream #0:0
Conversion failed!
What am I doing wrong?
The measure option was added a mere 4 weeks ago, so, yeah, you probably missed it by a couple of days. Grab the latest snapshot if you want to retrieve only the centroids. The snapshot you have should get you the centroids along with the other parameters if you just call aspectralstats (no options).
Also, the aspectralstats output only goes to the frame metadata and is not printed to stdout by default, so you need to append ametadata=print:file=- to your -af chain.
ffmpeg -i file.wav -af aspectralstats=measure=centroid,ametadata=print:file=- -f null -
Shameless plug: FYI, if you're calling it from Python, I have implemented an interface for this in ffmpegio, if you're interested.

Twitch Stream as input for ffmpeg

My objective is to take a Twitch video stream and generate an image sequence from it without having to create an intermediary file. I found out that ffmpeg can take a video and turn it into an image sequence. The ffmpeg website says that its input option can take network streams, although I really can't find any clear documentation for it. I've searched through Stack Overflow and haven't found any answers either.
I've tried adding the link to the stream:
ffmpeg -i www.twitch.tv/channelName
But the program either reported the error "No such file or directory", or caused a segmentation fault when I added https to the link.
I'm also using streamlink, and used it with ffmpeg in a Python script to try the streaming URL:
import streamlink
import subprocess
streams = streamlink.streams("http://twitch.tv/channelName")
stream = streams["worst"]
fd = stream.open()
url = fd.writer.stream.url
fd.close()
subprocess.run(['/path/to/ffmpeg', '-i', url], shell=True)
But that is producing the same error as the website URL. I'm pretty new to ffmpeg and streamlink so I'm not sure what I'm doing wrong. Is there a way for me to add a twitch stream to the input for ffmpeg?
I've figured it out. ffmpeg won't pull the online files for you; you have to pull them yourself. This can be done by issuing a GET on the stream URL, which returns a playlist of .ts file addresses; curl can then download these files to your drive. Combined with my image-sequencing goal, the process looks like this in Python:
import streamlink
import subprocess
import requests

if __name__ == "__main__":
    streams = streamlink.streams("http://twitch.tv/twitchplayspokemon")
    stream = streams["worst"]
    fd = stream.open()
    url = fd.writer.stream.url
    fd.close()
    res = requests.get(url)
    tsFiles = list(filter(lambda line: line.startswith('http'), res.text.splitlines()))
    print(tsFiles)
    for i, ts in enumerate(tsFiles):
        vid = 'vid{}.ts'.format(i)
        process = subprocess.run(['curl', ts, '-o', vid])
        process = subprocess.run(['ffmpeg', '-i', vid, '-vf', 'fps=1', 'out{}_%d.png'.format(i)])
It's not a perfect answer; you still have to create the intermediary video files, which I was hoping to avoid. Maybe there's a better and faster answer, but this will suffice.
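One way to avoid the intermediary files entirely would be to let the streamlink CLI write the stream to stdout and have ffmpeg read it from a pipe. A sketch, assuming streamlink's --stdout flag and a hypothetical channel name (verify both against your streamlink version):

```python
import subprocess

def build_pipeline(channel: str, quality: str = "worst"):
    """Command lists: streamlink writes the stream to stdout, ffmpeg reads stdin."""
    streamlink_cmd = ["streamlink", "--stdout", f"https://twitch.tv/{channel}", quality]
    ffmpeg_cmd = ["ffmpeg", "-i", "pipe:0", "-vf", "fps=1", "out_%d.png"]
    return streamlink_cmd, ffmpeg_cmd

if __name__ == "__main__":
    sl_cmd, ff_cmd = build_pipeline("twitchplayspokemon")
    sl = subprocess.Popen(sl_cmd, stdout=subprocess.PIPE)
    ff = subprocess.Popen(ff_cmd, stdin=sl.stdout)
    sl.stdout.close()  # let streamlink receive SIGPIPE if ffmpeg exits
    ff.wait()
```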

How to add chapters to ogg file?

I am trying to add chapters to an Ogg file containing Vorbis audio.
From this link I copied the following ffmpeg command.
ffmpeg -threads auto -y -i in.ogg -i metadata_OGG.txt -map_metadata 1 -codec copy out_METADATA.ogg
My metadata_OGG.txt file is as given below.
CHAPTER00=00:00:00.000
CHAPTER00NAME=Chapter 01
CHAPTER01=00:00:05.000
CHAPTER01NAME=Chapter 02
CHAPTER02=00:00:10.000
CHAPTER02NAME=Chapter 03
I am getting the following error.
[ogg # 00000000006d6900] Unsupported codec id in stream 0
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
But if I change -codec copy to -acodec copy, ffmpeg reports no error, but the text file is converted to video; i.e. the output file has a static video frame with the text of metadata_OGG.txt in it. Also, I observe the following log messages during conversion.
Stream #1:0 -> #0:0 (ansi (native) -> theora (libtheora))
Stream #0:0 -> #0:1 (copy)
Can anybody tell me what is going wrong here?
Also, I would like to know the right way to add chapters to Ogg. I searched for some tools as well, but did not find any.
Here is what worked for me using ffmpeg 4.3.1.
I have a metadata file which almost respects ffmpeg's metadata file format:
;FFMETADATA1
title=Evolution theory
[CHAPTER]
TIMEBASE=1/1000
START=0
END=
title=Darwin's point of view
[CHAPTER]
TIMEBASE=1/1000
START=78880
END=
title=Genghis Khan's children
Notice that the file format requires an END time, but leaving it empty caused no problem in my case.
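Such a metadata file can also be generated from a chapter list; a minimal sketch (the title strings and start times are just the examples from the file above):

```python
def build_ffmetadata(title, chapters):
    """Render an ;FFMETADATA1 file body from (start_ms, chapter_title) pairs.

    END is left empty for every chapter, matching the file shown above.
    """
    lines = [";FFMETADATA1", f"title={title}"]
    for start_ms, chapter_title in chapters:
        lines += [
            "[CHAPTER]",
            "TIMEBASE=1/1000",
            f"START={start_ms}",
            "END=",
            f"title={chapter_title}",
        ]
    return "\n".join(lines) + "\n"

text = build_ffmetadata("Evolution theory", [
    (0, "Darwin's point of view"),
    (78880, "Genghis Khan's children"),
])
print(text)
```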
Now I add chapter information to my opus/ogg file:
ffmpeg -i darwin.opus.ogg -i darwin_chapters.txt -map_metadata 1 -c copy darwin_withchapters.opus.ogg
Note: if you want to overwrite existing chapter information from the file, you may need to add a -map_chapters 1 parameter in the ffmpeg command line above.
That creates the file darwin_withchapters.opus.ogg. I check if chapter info has really been added to the file:
opusinfo darwin_withchapters.opus.ogg
You would use ogginfo for Ogg/Vorbis files.
And here is the result (I removed a few irrelevant lines):
ENCODER=opusenc from opus-tools 0.1.10
ENCODER_OPTIONS=--bitrate 112
title=Evolution theory
CHAPTER000=00:00:00.000
CHAPTER000NAME=Darwin's point of view
CHAPTER001=00:01:19.880
CHAPTER001NAME=Genghis Khan's children
[...]
Here you go. ffmpeg did the conversion between its metadata file format to the vorbis tag/comment chapter format.
You could also directly write metadata in the Vorbis Chapter Extension format, and use the classic vorbiscomment tool, or other tools which allow editing of opus/ogg in-file tags.
Opus has been mentioned here. I was trying to make opusenc from opus-tools add chapters when encoding and couldn’t find a command line example anywhere. Thanks to the hints in this thread I managed to figure it out, perhaps someone may find it helpful.
opusenc --comment "CHAPTER000=00:00:00.000" --comment "CHAPTER000NAME=Hello" --comment "CHAPTER001=01:23:45.678" --comment "CHAPTER001NAME=World" input.wav output.opus
The chapter key/value scheme is the aforementioned Ogg/Matroska one. Of course, more metadata options like --title, --artist etc. can be added.
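Those --comment pairs can also be built programmatically; a sketch that formats (milliseconds, title) chapters into opusenc arguments, with timestamps laid out as HH:MM:SS.mmm as in the command above:

```python
def chapter_comments(chapters):
    """Build opusenc --comment arguments from (milliseconds, title) pairs."""
    args = []
    for idx, (ms, title) in enumerate(chapters):
        h, rem = divmod(ms, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, msec = divmod(rem, 1000)
        stamp = f"{h:02d}:{m:02d}:{s:02d}.{msec:03d}"
        args += ["--comment", f"CHAPTER{idx:03d}={stamp}",
                 "--comment", f"CHAPTER{idx:03d}NAME={title}"]
    return args

print(chapter_comments([(0, "Hello"), (5025678, "World")]))
```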
Using ffmpeg to add the chapters caused two problems for me: the artwork image in the ogg/opus input file was missing from the output file, and ffmpeg rejected empty END chapter times.
I did this on Windows 10 using
opusenc opus-tools 0.2-3-gf5f571b (using libopus 1.3)
ffmpeg version 4.4.1-essentials_build-www.gyan.dev
opusinfo, MPC-HC (64-bit) v1.7.11 and VLC Media Player 3.0.14 Vetinari to confirm.
I found the issue.
For ffmpeg to work, the metadata file should have the following header.
;FFMETADATA1
I followed the steps given in ffmpeg documentation for metadata.
But the issue is not completely resolved.
With the above steps I am able to add metadata to mp4, mkv, and other container files, but not to Ogg files. I am not sure whether ffmpeg supports adding chapters to Ogg files.

How to use codec type properly in NPM

I'm trying to use '-acodec libopus' in my npm project, as I do on the command line in the following format:
ffmpeg -acodec libopus -i 1.webm 1.wav
This works perfectly! But I would like to make it possible in my NPM project.
How can I set the parameters?
This is what I have, but it's not working. The output file is broken in a way that some frames of the audio are missing: there is sound, then there is not, and so on.
var proc = new ffmpeg({
    source: file,
    nolog: false
});
format = "opus"; // or could be wav as well!
proc.addOptions([
    '-f ' + format,
    '-acodec libopus',
    '-vn'
]);
The purpose is to extract the audio from the video file seamlessly.
Without the libopus codec, I get the following errors in the command prompt, so I assume I should handle the same issue in my NPM project as well.
[opus # 00000000006d4520] LBRR frames is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.
[opus # 00000000006d4520] Error decoding a SILK frame.
[opus # 00000000006d4520] Error decoding an Opus frame.
My library is up to date, I just need to use the codec libopus properly.
Any suggestions?
\node-js>ffmpeg -version
ffmpeg version N-86175-g64ea4d1 Copyright (c) 2000-2017 the FFmpeg
developers
built with gcc 6.3.0 (GCC)
Output in command line;
xtranscribe transcodeWatson: file : ./data/that/2.webm
progress 62.625273103421605%
progress 100.01224534515762%
SAVED - transcodeWatson : .mp3
out of transcode!
fileSizeInBytes : 16284033
According to the README, you can add input options to the process:
proc.addInputOption('-acodec libopus');
It matters where you place an option in ffmpeg. If you put it before -i, it applies to that particular input. If you put it before an output file name, it applies to that output.

connection timeout between client and ffserver

I have a directory which contains some files. I loop over these files and stream them to ffserver using ffmpeg. The problem is that when a song is over, the client stops receiving the stream. VLC and jwplayer both have this problem in my tests (although I can work around it in jwplayer by adding the repeat: true option, I don't think that's such a good idea).
What I need is an option or some trick in ffserver that keeps the connection alive (at least for a while), so that when a song is over, the next song starts automatically (it takes 1 second to switch songs). Maybe ffserver has a timeout option?
I ended up using concat to stream the files without breaking the connection.
The easiest way is to create a file, name it file_paths.txt, and add the file paths like this:
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'
and then in your ffmpeg command do something like this :
ffmpeg -re -f concat -i file_paths.txt http://ip:8090/feed1.ffm
This works really well, although all files must have the same codec and format.
For more information, and to see how to use concat with different formats, see this.
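Generating file_paths.txt from a directory can be scripted; a minimal sketch (the extension filter is an assumption, and paths containing single quotes would need extra escaping for the concat demuxer):

```python
import os

def write_concat_list(directory, list_path="file_paths.txt", ext=".mp3"):
    """Write a concat-demuxer file list for every matching file in a directory."""
    names = sorted(n for n in os.listdir(directory) if n.endswith(ext))
    with open(list_path, "w") as f:
        for name in names:
            path = os.path.join(directory, name)
            # NOTE: paths containing single quotes would need extra escaping here.
            f.write(f"file '{path}'\n")
    return names
```

Then pass the generated list to ffmpeg with -f concat -i file_paths.txt as in the command above.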
