ffmpeg can't find codec to cut 10 seconds from movie

I tried to cut 10 seconds from a movie and convert it to MP4, but sometimes I get an error like the one below:
Duration: 00:08:52.40, start: 0.000000, bitrate: 1126 kb/s
Stream #0:0: Audio: wmav2 (a[1][0][0] / 0x0161), 44100 Hz, stereo, fltp, 96 kb/s
Stream #0:1: Video: wmv3 (Main) (WMV3 / 0x33564D57), yuv420p, 640x480, 1000 kb/s, SAR 1:1 DAR 4:3, 29.97 tbr, 1k tbn, 1k tbc
[mp4 @ 0x5614bbea1300] Could not find tag for codec wmv3 in stream #0, codec not currently supported in container
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Stream mapping:
What does this error mean? Should I install some extra codec?
My command line looks like:
ffmpeg -i input.wmv -ss 00:00:00 -to 00:00:10 -c copy output.mp4

Could not find tag for codec wmv3 in stream #0, codec not currently supported in container
You're trimming the file without recompressing, and ffmpeg cannot write Windows Media 9 (WMV3) video into MP4, so either recompress:
ffmpeg -i input.wmv -ss 00:00:00 -to 00:00:10 output.mp4
or output to a different container, like Matroska:
ffmpeg -i input.wmv -ss 00:00:00 -to 00:00:10 -c copy output.mkv
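If you want explicit control over the output codecs when recompressing, a minimal sketch that selects H.264 and AAC (the usual MP4 pairing; the quality settings are assumptions, tune to taste):
ffmpeg -i input.wmv -ss 00:00:00 -to 00:00:10 -c:v libx264 -crf 23 -c:a aac output.mp4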

Related

ffmpeg convert rtmp audio/video stream to icecast2 audio/video stream

I've been using this command to convert my public rtmp audio/video stream to a local mp3 audio icecast2 stream, but I have been unable to do the same for both video and audio.
[Audio Only] (This works fine)
ffmpeg -re -i rtmp://162.142.xx.xxx:xxx/stream -vn -codec:a libmp3lame -b:a 128k -f mp3 -content_type audio/mpeg icecast://source:password@192.168.1.xxx:80/live
I've tried to rewrite it to support video, but I keep hitting dead ends.
[Audio & Video Attempt] (this does not work)
ffmpeg -re -i rtmp://162.142.xx.xxx:xxx/stream -codec:v -f mpeg4 -b:v -f mpeg4 -content_type video/mpeg4 icecast://source:password@192.168.1.xxx:80/live
When I run this command, it gives me the error below asking for a suitable format.
$ ffmpeg -re -i rtmp://162.142.xx.xxx:xxx/stream -codec:v -f mpeg4 -b:v -f mpeg4 -content_type video/mpeg4 icecast://source:password@192.168.1.xxx:80/live
[h264 @ 0x5598ffbb8980] co located POCs unavailable
[h264 @ 0x5598ffbb8980] mmco: unref short failure
Input #0, flv, from 'rtmp://162.142.xx.xxx:xxx/stream':
Metadata:
|RtmpSampleAccess: true
Server : NGINX RTMP (github.com/sergey-dryabzhinsky/nginx-rtmp-module)
displayWidth : 1280
displayHeight : 720
fps : 48
videokeyframe_frequency: 0
profile :
level :
Duration: 00:00:00.00, start: 28117.779000, bitrate: N/A
Stream #0:0: Audio: aac (LC), 48000 Hz, stereo, fltp, 327 kb/s
Stream #0:1: Video: h264 (High), yuv420p(tv, bt709, progressive), 1280x720 [SAR 1:1 DAR 16:9], 2560 kb/s, 48 fps, 48 tbr, 1k tbn
[NULL @ 0x5598ffb8bec0] Unable to find a suitable output format for 'mpeg4'
mpeg4: Invalid argument
I am positive that icecast2 can support video streams; however, on the few occasions I was able to stream to it successfully, it only showed an empty video embed.
I've rewritten the command for A/V multiple times while referencing the ffmpeg documentation, but my attempt above seems to be the closest (concept-wise) I have gotten.
What flags/formatting might I be missing that are causing the stream not to work?
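For what it's worth, the "Unable to find a suitable output format" error appears because -f expects a muxer name and mpeg4 is a codec, not a container; Icecast is normally fed a streamable container such as Ogg or WebM. A sketch of one commonly suggested approach (Theora/Vorbis in Ogg; untested here, and the host/credentials are the question's own placeholders):
ffmpeg -re -i rtmp://162.142.xx.xxx:xxx/stream -c:v libtheora -q:v 7 -c:a libvorbis -b:a 128k -f ogg -content_type application/ogg icecast://source:password@192.168.1.xxx:80/live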

How to trim webm video while preserving transparency

I want to trim a transparent webm video using ffmpeg. Here's the ffprobe result for that video:
Input #0, matroska,webm, from 'template.webm':
Metadata:
ENCODER : Lavf58.29.100
Duration: 00:00:05.24, start: -0.002000, bitrate: 2856 kb/s
Stream #0:0: Video: vp8, yuv420p(progressive), 1573x900, SAR 1:1 DAR 1573:900, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
Metadata:
ALPHA_MODE : 1
ENCODER : Lavc58.54.100 libvpx
DURATION : 00:00:05.240000000
Stream #0:1: Audio: opus, 48000 Hz, mono, fltp
Metadata:
ENCODER : Lavc58.54.100 libopus
DURATION : 00:00:05.241000000
I tried
ffmpeg -i template.webm -ss 1 -to 3 -c copy trimmed.webm
but the trimmed video doesn't start (or sometimes end) at the exact times specified in the command, so I tried re-encoding the video using libvpx:
ffmpeg -i template.webm -ss 1 -to 3 -c:v libvpx -c:a copy -crf 30 -b:v 0 trimmed.webm
That solved the timing issue, but the output video loses its transparency. Here's the ffprobe output:
Input #0, matroska,webm, from 'trimmed.webm':
Metadata:
ENCODER : Lavf57.83.100
Duration: 00:00:02.00, start: -0.001000, bitrate: 1395 kb/s
Stream #0:0: Video: vp8, yuv420p(progressive), 1573x900, SAR 1:1 DAR 1573:900, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
Metadata:
ALPHA_MODE : 1
ENCODER : Lavc57.107.100 libvpx
DURATION : 00:00:02.000000000
Stream #0:1: Audio: opus, 48000 Hz, mono, fltp
Metadata:
ENCODER : Lavc58.54.100 libopus
DURATION : 00:00:02.001000000
How can I trim the video while preserving its transparency? A fast solution would also be extremely helpful.
The native, built-in FFmpeg VP8 decoder does not yet support alpha/transparency. Use libvpx to decode:
ffmpeg -c:v libvpx -i template.webm -ss 1 -to 3 -c:v libvpx -c:a copy -crf 30 -b:v 0 trimmed.webm
If you get a "Transparency encoding with auto_alt_ref does not work" error, add the -auto-alt-ref 0 output option, or change the -c:v libvpx output option to -c:v libvpx-vp9.
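Putting that together, a sketch of the full command with the workaround applied (same input and trim points as above):
ffmpeg -c:v libvpx -i template.webm -ss 1 -to 3 -c:v libvpx -auto-alt-ref 0 -c:a copy -crf 30 -b:v 0 trimmed.webm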

FFMPEG - concatenating mp4s from different sources - unable to stop "Non-monotonous DTS in output stream" warning

I need to concatenate mp4 files from different sources, which means some of the variables, such as timebase, aspect ratio and encoding, are out of my control. To get around this I re-encode and attempt to standardise the files before concatenating them. Unfortunately, despite this I get Non-monotonous DTS in output stream warnings during the concatenation stage, and the output video always seems to have broken audio/video syncing by the last segment.
I know there are a lot of other questions out there about resolving the warning above, but I've been through them all and reviewed the documentation; unfortunately, I've still been unable to solve it.
I think the thing which I don't understand is: if I have mp4s from different sources, what exactly do I need to do to ensure that the files will always neatly concatenate together?
What I've tried so far
The script I'm using to standardise the mp4 files before concatenation is the following (it amends resolution, frame rate, timebase, audio bitrate, video bitrate, audio encoding and video encoding):
ffmpeg -y -i $1 -vf 'scale=1280:720:force_original_aspect_ratio=1,pad=1280:720:(ow-iw)/2:(oh-ih)/2' -r 30 -video_track_timescale 90000 -b:a 128K -b:v 1200K -c:a aac -c:v libx264 $2
Here's the ffprobe output for two of the files; there are some differences, but I'm not sure whether they are significant:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'intro.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.12.100
Duration: 00:00:08.98, start: 0.000000, bitrate: 1210 kb/s
Stream #0:0(eng): Video: h264 (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1069 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: aac (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 132 kb/s (default)
Metadata:
handler_name : SoundHandler
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'middle.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf58.12.100
Duration: 00:00:59.72, start: 0.000000, bitrate: 1200 kb/s
Stream #0:0(und): Video: h264 (avc1 / 0x31637661), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 1063 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : SoundHandler
They all have normal video and audio at this point.
After that I concatenate them and add a watermark using the following (it sucks that I need to re-encode here):
ffmpeg -y \
-f concat \
-safe 0 \
-i $INFILES \
-c:v libx264 \
-c:a copy \
-preset fast \
-vf drawtext=enable="'between(t, $DRAW_TEXT_DELAY, $DRAW_TEXT_DURATION)': fontfile=$FONT_DIR/$FONT: text='$TEXT': fontcolor=$FONTCOLOR: fontsize=$FONTSIZE: $POSITION" \
$OUTFILE
INFILES is a path to a text file formatted like:
file /usr/src/app/data/test/out/intro.mp4
file /usr/src/app/data/test/out/middle.mp4
file /usr/src/app/data/test/out/outro.mp4
What am I missing here? Is there a way to debug this further?
Your audio streams have distinct sampling rates, and may have distinct channel counts as well. Also, compressed MPEG audio streams will introduce slight async upon concat.
Use
ffmpeg -y -i $1 -vf 'scale=1280:720:force_original_aspect_ratio=1,pad=1280:720:(ow-iw)/2:(oh-ih)/2,setsar=1,format=yuv420p' -r 30 -c:v libx264 -b:v 1200K -ac 2 -ar 48000 -c:a pcm_s16le -video_track_timescale 90000 $2
to standardize, but save to MOV.
Then during concat, change -c:a copy to -c:a aac.
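Applied to the concat command from the question, a minimal sketch (the drawtext filter is omitted for brevity; keep it as before, and the shell variables are the question's own):
ffmpeg -y -f concat -safe 0 -i $INFILES -c:v libx264 -c:a aac -preset fast $OUTFILE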
There are three methods to concatenate files in FFmpeg.
Demuxer (You are using this)
This method can be used to concat files with the same parameters, like codecs, size, PAR, etc.
$ ffmpeg -f concat -i files.txt [...] output.mp4
Protocol
Same as the first one, but on top of that, this method is useful for files that can be copied together bitwise; it doesn't involve re-encoding (some formats support this, like MPEG-TS or some lossless formats).
$ ffmpeg -i "concat:FILE_0| ... |FILE_N" [...] output.mp4
Filter
If you have videos with different codecs, you have to use this method:
$ ffmpeg -i <FILE_0> ... -i <FILE_N> [...] -filter_complex "[0:0][0:1]...[<N>:0][<N>:1] concat=n=<N>:v=1:a=1[v_out][a_out]" -map [v_out] -map [a_out] output.mp4
The concat filter decodes the video and re-encodes it with the same parameters. It also takes care of the audio streams. I'm not entirely sure what it does if the resolutions are different, but this should be a good start.
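For instance, a sketch for two of the question's files (hypothetical output name), each with one video and one audio stream:
ffmpeg -i intro.mp4 -i middle.mp4 -filter_complex "[0:v][0:a][1:v][1:a]concat=n=2:v=1:a=1[v_out][a_out]" -map "[v_out]" -map "[a_out]" -c:v libx264 -c:a aac output.mp4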

Duration changes after transcoding TS

I have a problem transcoding with ffmpeg.
I want to convert an m3u8 to MP4, so I transcode every TS file first and then concat them into an MP4, but I found that the duration becomes longer than the source file's.
The source file is:
http://oc7iy3eta.bkt.clouddn.com/src_20.ts
After transcoding, the test file is:
http://oc7iy3eta.bkt.clouddn.com/test_20.ts
I use the command below to change to 5 fps and a 400k bitrate:
sudo ffmpeg -analyzeduration 2147483647 -probesize 2147483647 -nostdin -y -v warning -i ./src_20.ts -threads 3 -movflags faststart -metadata:s:v rotate=0 -chunk_duration 520000 -video_track_timescale 25000 -pix_fmt yuv420p -copytb 1 -vcodec libx264 -b:v 400000 -minrate 400000 -maxrate 400000 -bufsize 500k -force_key_frames "expr:gte(t,n_forced*2)" -vsync 1 -r 5 -s 544*960 -acodec libfaac -async 1 ./test_20.ts
I use ffprobe to see the video info.
Source file info:
Duration: 00:00:01.26, start: 28.346989, bitrate: 921 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Audio: aac ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 23 kb/s
Stream #0:1[0x101]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 544x960, 10.67 tbr, 90k tbn, 180k tbc
Test file:
Input #0, mpegts, from 'test_20.ts':
Duration: 00:00:01.62, start: 1.576778, bitrate: 447 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p, 544x960, 5 fps, 5 tbr, 90k tbn, 10 tbc
Stream #0:1[0x101]: Audio: aac ([15][0][0][0] / 0x000F), 44100 Hz, stereo, fltp, 5 kb/s
Question
So, we can see that the duration of the source file is 1.26s, but after transcoding, the test file is 1.62s.
Why? Can anybody help?
I suggest you save the m3u8 to a single TS and then transcode that to MP4.
ffmpeg -i in.m3u8 -c copy src.ts
Your current command transcodes each TS to CFR at half the rate, but your source timestamps have some jitter, so due to PTS quantization there will be a mismatch. A single-file transcode will minimize it.
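The second step is left implicit; a sketch of transcoding the joined TS to MP4, reusing the question's target parameters (5 fps, 400k video bitrate) with the built-in AAC encoder in place of libfaac (which has been removed from ffmpeg):
ffmpeg -i src.ts -r 5 -c:v libx264 -b:v 400k -minrate 400k -maxrate 400k -bufsize 500k -c:a aac -movflags +faststart out.mp4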

FFMPEG add text frames to the start of video

I have some videos either in mp4 or webm format, and I'd like to use ffmpeg to add 4 seconds to the start of each video to display some text in the center with no sound.
Some other requirements:
try to avoid re-encoding the video
need to maintain the quality (resolution, bitrate, etc)
(optional) to make the text fade in/out
I am new to ffmpeg and any help will be appreciated. Thanks in advance.
Example ffprobe information for mp4 below:
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'input.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf55.33.100
Duration: 00:00:03.84, start: 0.042667, bitrate: 1117 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1280x720, 1021 kb/s, 30 fps, 30 tbr, 15360 tbn, 60 tbc (default)
Metadata:
handler_name : VideoHandler
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 140 kb/s (default)
Metadata:
handler_name : SoundHandler
Example webm
Input #0, matroska,webm, from 'input.webm':
Metadata:
encoder : Lavf55.33.100
Duration: 00:00:03.80, start: 0.000000, bitrate: 1060 kb/s
Stream #0:0(eng): Video: vp8, yuv420p, 1280x720, SAR 1:1 DAR 16:9, 30 fps, 30 tbr, 1k tbn, 1k tbc (default)
Stream #0:1(eng): Audio: vorbis, 48000 Hz, stereo, fltp (default)
[Screenshot from joined.mp4]
[Screenshot of the step 3 console output]
You'll have to generate a 4-second video with dummy audio matching the parameters of the existing video, including the timebase, and then use the concat demuxer with stream copy.
For the sample files shown in the question:
Step 1 Generate text video
ffmpeg -f lavfi -r 30 -i color=black:1280x720 -f lavfi -i anullsrc -vf "drawtext=fontfile='/path/to/font.ttf':fontcolor=FFFFFF:fontsize=50:text='Your text':x='(main_w-text_w)/2':y='(main_h-text_h)/2',fade=t=in:st=0:d=1,fade=t=out:st=3:d=1" -c:v libx264 -b:v 1000k -pix_fmt yuv420p -video_track_timescale 15360 -c:a aac -ar 48000 -ac 2 -sample_fmt fltp -t 4 intro.mp4
For WebM, replace -c:v libx264 with -c:v libvpx, -c:a aac with -c:a libvorbis, and intro.mp4 with intro.webm. You may remove -video_track_timescale 15360, since WebMs tend to use a single timescale, from what I've seen. The full variant is sketched below.
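ffmpeg -f lavfi -r 30 -i color=black:1280x720 -f lavfi -i anullsrc -vf "drawtext=fontfile='/path/to/font.ttf':fontcolor=FFFFFF:fontsize=50:text='Your text':x='(main_w-text_w)/2':y='(main_h-text_h)/2',fade=t=in:st=0:d=1,fade=t=out:st=3:d=1" -c:v libvpx -b:v 1000k -pix_fmt yuv420p -c:a libvorbis -ar 48000 -ac 2 -sample_fmt fltp -t 4 intro.webm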
Step 2 Prepare concat file, say, list.txt
file 'intro.mp4'
file 'input.mp4'
Step 3 Concat
ffmpeg -f concat -i list.txt -c copy -fflags +genpts joined.mp4
The important variables here are the video size (1280x720), frame rate (-r 30), pixel format (-pix_fmt yuv420p), sample rate (-ar 48000), sample format (-sample_fmt fltp), channel layout (-ac 2) and, of course, the codecs.
The short answer is that you cannot encode new data as MP4 or WebM and insert it at the front of the video stream; those formats simply do not work like that. Both of these encoding formats are lossy, so if you decode and encode them again, additional information will be lost or changed by the second encoding. You could do something else, but what you are trying to do will not work.