I am using the following command:
ffmpeg
-i "video1a.flv"
-i "video1b.flv"
-i "video1c.flv"
-i "video2a.flv"
-i "video3a.flv"
-i "video4a.flv"
-i "video4b.flv"
-i "video4c.flv"
-i "video4d.flv"
-i "video4e.flv"
-filter_complex
nullsrc=size=640x480[base];
[0:v]setpts=PTS-STARTPTS+0.12/TB,scale=320x240[1a];
[1:v]setpts=PTS-STARTPTS+3469.115/TB,scale=320x240[1b];
[2:v]setpts=PTS-STARTPTS+7739.299/TB,scale=320x240[1c];
[5:v]setpts=PTS-STARTPTS+4390.466/TB,scale=320x240[4a];
[6:v]setpts=PTS-STARTPTS+6803.937/TB,scale=320x240[4b];
[7:v]setpts=PTS-STARTPTS+8242.005/TB,scale=320x240[4c];
[8:v]setpts=PTS-STARTPTS+9811.577/TB,scale=320x240[4d];
[9:v]setpts=PTS-STARTPTS+10765.19/TB,scale=320x240[4e];
[base][1a]overlay=eof_action=pass[o1];
[o1][1b]overlay=eof_action=pass[o1];
[o1][1c]overlay=eof_action=pass:shortest=1[o1];
[o1][4a]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4b]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4c]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4d]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4e]overlay=eof_action=pass:x=320:y=240;
[0:a]asetpts=PTS-STARTPTS+0.12/TB,aresample=async=1,pan=1c|c0=c0,apad[a1a];
[1:a]asetpts=PTS-STARTPTS+3469.115/TB,aresample=async=1,pan=1c|c0=c0,apad[a1b];
[2:a]asetpts=PTS-STARTPTS+7739.299/TB,aresample=async=1,pan=1c|c0=c0[a1c];
[3:a]asetpts=PTS-STARTPTS+82.55/TB,aresample=async=1,pan=1c|c0=c0,apad[a2a];
[4:a]asetpts=PTS-STARTPTS+2687.265/TB,aresample=async=1,pan=1c|c0=c0,apad[a3a];
[a1a][a1b][a1c][a2a][a3a]amerge=inputs=5
-c:v libx264 -c:a aac -ac 2 output.mp4
This is the stream data from ffmpeg:
Input #0
Stream #0:0: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
Stream #0:1: Audio: nellymoser, 11025 Hz, mono, flt
Input #1
Stream #1:0: Audio: nellymoser, 11025 Hz, mono, flt
Stream #1:1: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
Input #2
Stream #2:0: Audio: nellymoser, 11025 Hz, mono, flt
Stream #2:1: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
Input #3
Stream #3:0: Audio: nellymoser, 11025 Hz, mono, flt
Input #4
Stream #4:0: Audio: nellymoser, 11025 Hz, mono, flt
Input #5
Stream #5:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
Input #6
Stream #6:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
Input #7
Stream #7:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
Input #8
Stream #8:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
Input #9
Stream #9:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
Stream mapping:
Stream #0:0 (vp6f) -> setpts
Stream #0:1 (nellymoser) -> asetpts
Stream #1:0 (nellymoser) -> asetpts
Stream #1:1 (vp6f) -> setpts
Stream #2:0 (nellymoser) -> asetpts
Stream #2:1 (vp6f) -> setpts
Stream #3:0 (nellymoser) -> asetpts
Stream #4:0 (nellymoser) -> asetpts
Stream #5:0 (vp6f) -> setpts
Stream #6:0 (vp6f) -> setpts
Stream #7:0 (vp6f) -> setpts
Stream #8:0 (vp6f) -> setpts
Stream #9:0 (vp6f) -> setpts
overlay -> Stream #0:0 (libx264)
amerge -> Stream #0:1 (aac)
This is the error:
Press [q] to stop, [?] for help
Enter command: <target>|all <time>|-1 <command>[ <argument>]
Parse error, at least 3 arguments were expected, only 1 given in string 'ho Oscar'
[Parsed_amerge_44 # 0a7238c0] No channel layout for input 1
[Parsed_amerge_44 # 0a7238c0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
[Parsed_pan_27 # 07681880] Pure channel mapping detected: 0
[Parsed_pan_31 # 07681b40] Pure channel mapping detected: 0
[Parsed_pan_35 # 0a7232c0] Pure channel mapping detected: 0
[Parsed_pan_38 # 0a7234c0] Pure channel mapping detected: 0
[Parsed_pan_42 # 0a723740] Pure channel mapping detected: 0
[libx264 # 069e8a40] using SAR=1/1
[libx264 # 069e8a40] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 # 069e8a40] profile High, level 3.0
[libx264 # 069e8a40] 264 - core 155 r2901 7d0ff22 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=15 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
Metadata:
canSeekToEnd : false
encoder : Lavf58.16.100
Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p(progressive), 640x480 [SAR 1:1 DAR 4:3], q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
Metadata:
encoder : Lavc58.19.102 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 11025 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
encoder : Lavc58.19.102 aac
frame= 200 fps=0.0 q=28.0 size= 0kB time=00:00:07.82 bitrate= 0.0kbits/s speed=15.6x
...
frame=30132 fps=497 q=28.0 size= 29952kB time=00:20:05.14 bitrate= 203.6kbits/s speed=19.9x
Error while filtering: Cannot allocate memory
Failed to inject frame into filter network: Cannot allocate memory
Error while processing the decoded data for stream #2:1
[libx264 # 069e8a40] frame I:121 Avg QP: 8.83 size: 7052
[libx264 # 069e8a40] frame P:7609 Avg QP:18.33 size: 1527
[libx264 # 069e8a40] frame B:22367 Avg QP:25.44 size: 112
[libx264 # 069e8a40] consecutive B-frames: 0.6% 0.7% 1.0% 97.8%
[libx264 # 069e8a40] mb I I16..4: 75.7% 18.3% 6.0%
[libx264 # 069e8a40] mb P I16..4: 0.3% 0.7% 0.1% P16..4: 10.6% 3.3% 1.6% 0.0% 0.0% skip:83.4%
[libx264 # 069e8a40] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 3.2% 0.2% 0.0% direct: 0.2% skip:96.5% L0:47.7% L1:48.2% BI: 4.0%
[libx264 # 069e8a40] 8x8 transform intra:37.4% inter:70.2%
[libx264 # 069e8a40] coded y,uvDC,uvAC intra: 38.9% 46.1% 28.7% inter: 1.7% 3.3% 0.1%
[libx264 # 069e8a40] i16 v,h,dc,p: 78% 8% 4% 10%
[libx264 # 069e8a40] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 33% 20% 12% 3% 6% 8% 6% 6% 7%
[libx264 # 069e8a40] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 37% 22% 9% 4% 6% 7% 5% 5% 4%
[libx264 # 069e8a40] i8c dc,h,v,p: 60% 16% 17% 7%
[libx264 # 069e8a40] Weighted P-Frames: Y:0.7% UV:0.6%
[libx264 # 069e8a40] ref P L0: 65.5% 12.3% 14.2% 8.0% 0.0%
[libx264 # 069e8a40] ref B L0: 90.2% 7.5% 2.3%
[libx264 # 069e8a40] ref B L1: 96.4% 3.6%
[libx264 # 069e8a40] kb/s:99.58
[aac # 069e9600] Qavg: 65519.982
[aac # 069e9600] 2 frames left in the queue on closing
Conversion failed!
I am trying to figure out how to fix these errors:
Error while filtering: Cannot allocate memory
Failed to inject frame into filter network: Cannot allocate memory
Error while processing the decoded data for stream #2:1
Observation #1
If I run the following command on stream #2:1 by itself:
ffmpeg -i video1c.flv -vcodec libx264 -acodec aac video1c.mp4
The file is converted fine with no errors.
Observation #2
Running MediaInfo on video1c.flv (stream #2) shows the following:
Format: Flash Video
Video Codecs: On2 VP6
Audio Codecs: Nellymoser
Any help would be appreciated in resolving this error.
Update #1
I have tried splitting the filter graph into two as requested but I receive the same errors:
Error while filtering: Cannot allocate memory
Failed to inject frame into filter network: Cannot allocate memory
Error while processing the decoded data for stream #1:1
However, I did discover something: if I try to open stream #1:1 mentioned above (video1b.flv) in VLC Media Player, I can hear the audio but I cannot see the video, and I receive this error message:
No suitable decoder module:
VLC Does not support the audio or video format "undf".
Unfortunately there is no way for you to fix this.
Update #2
The above error was with the 32-bit version of ffmpeg. I switched to a 64-bit machine and am now running the 64-bit build ffmpeg-20180605-b748772-win64-static.
Now I no longer receive the following error:
Error while processing the decoded data for stream #1:1
But I now have a new error: about an hour into the run, I receive the following:
av_interleaved_write_frame(): Cannot allocate memory
[mp4 # 000000000433f080] Application provided duration: 3327365388930198318 / timestamp: 17178820096 is out of range for mov/mp4 format
I also tried remuxing all the files first, as suggested, and running the above command on those remuxed files, but that did not help; I still get the same error.
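(For clarity, by remuxing I mean a plain stream copy of each input into a fresh container, along the lines of the sketch below; this is only an illustration of the idea, not my exact command.)
ffmpeg -i video1b.flv -c copy video1b-remux.flv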
Try with audio and video in different filtergraphs
ffmpeg
-i "video1a.flv"
-i "video1b.flv"
-i "video1c.flv"
-i "video2a.flv"
-i "video3a.flv"
-i "video4a.flv"
-i "video4b.flv"
-i "video4c.flv"
-i "video4d.flv"
-i "video4e.flv"
-filter_complex
nullsrc=size=640x480[base];
[0:v]setpts=PTS-STARTPTS+0.12/TB,scale=320x240[1a];
[1:v]setpts=PTS-STARTPTS+3469.115/TB,scale=320x240[1b];
[2:v]setpts=PTS-STARTPTS+7739.299/TB,scale=320x240[1c];
[5:v]setpts=PTS-STARTPTS+4390.466/TB,scale=320x240[4a];
[6:v]setpts=PTS-STARTPTS+6803.937/TB,scale=320x240[4b];
[7:v]setpts=PTS-STARTPTS+8242.005/TB,scale=320x240[4c];
[8:v]setpts=PTS-STARTPTS+9811.577/TB,scale=320x240[4d];
[9:v]setpts=PTS-STARTPTS+10765.19/TB,scale=320x240[4e];
[base][1a]overlay=eof_action=pass[o1];
[o1][1b]overlay=eof_action=pass[o1];
[o1][1c]overlay=eof_action=pass:shortest=1[o1];
[o1][4a]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4b]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4c]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4d]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4e]overlay=eof_action=pass:x=320:y=240
-filter_complex
[0:a]asetpts=PTS-STARTPTS+0.12/TB,aresample=async=1,pan=1c|c0=c0,apad[a1a];
[1:a]asetpts=PTS-STARTPTS+3469.115/TB,aresample=async=1,pan=1c|c0=c0,apad[a1b];
[2:a]asetpts=PTS-STARTPTS+7739.299/TB,aresample=async=1,pan=1c|c0=c0[a1c];
[3:a]asetpts=PTS-STARTPTS+82.55/TB,aresample=async=1,pan=1c|c0=c0,apad[a2a];
[4:a]asetpts=PTS-STARTPTS+2687.265/TB,aresample=async=1,pan=1c|c0=c0,apad[a3a];
[a1a][a1b][a1c][a2a][a3a]amerge=inputs=5
-c:v libx264 -c:a aac -ac 2 output.mp4
Related
Context
I have a process flow that may output either H.264 Annex B streams, variably-spaced JPEGs, or a mixture of the two. By variably-spaced I mean that the elapsed time between any two adjacent JPEGs may be (and likely is) different from that between any other two adjacent JPEGs. So examples of possible inputs are:
stream1.h264
{Set of JPEGs}
stream1.h264 + stream2.h264
stream1.h264 + {Set of JPEGs}
stream1.h264 + {Set of JPEGs} + stream2.h264
stream1.h264 + {Set of JPEGs} + stream2.h264 + {Set of JPEGs} + ...
stream1.h264 + stream2.h264 + {Set of JPEGs} + ...
The output needs to be a single stitched (i.e. concatenated) stream in an MPEG-4 container.
Requirements: no re-encoding or transcoding of the existing video compression (a one-time conversion of the JPEG sets to a video format is okay).
Solution Prototype
To prototype the solution I found that ffmpeg has a concat demuxer that lets me specify an ordered sequence of inputs which ffmpeg then concatenates together, but all inputs must be of the same format. So, to meet that requirement, I:
Convert every JPEG set to an .mp4 using the concat demuxer (with the duration directive to specify the time spacing between JPEGs)
Convert every .h264 to .mp4 using -c copy to avoid transcoding.
Stitch all generated interim .mp4 files into the single final .mp4 using -f concat and -c copy.
Here's the bash script, in parts, that performs the above:
Ignore the curl comment; it's left over from originally generating 100 JPEG images with numbers on them, which are simply saved locally. The loop generates the concat input file with file <sequence#>.jpeg directives and a duration directive, where each successive JPEG delay is incremented by 0.1 seconds (0.1 between the 1st and 2nd, 0.2 between the 2nd and 3rd, 0.3 between the 3rd and 4th, and so on). Then the ffmpeg command converts the set of JPEGs to an interim .mp4 file.
echo "ffconcat version 1.0" >ffconcat-jpeg.txt
echo >>ffconcat-jpeg.txt
for i in {1..100}
do
echo "file $i.jpeg" >>ffconcat-jpeg.txt
d=$(echo "$i" | awk '{printf "%f", $1 / 10}')
# d=$(echo "scale=2; $i/10" | bc)
echo "duration $d" >>ffconcat-jpeg.txt
echo "" >>ffconcat-jpeg.txt
# curl -o "$i.jpeg" "https://math.tools/equation/get_equaimages?equation=$i&fontsize=256"
done
ffmpeg \
-hide_banner \
-vsync vfr \
-f concat \
-i ffconcat-jpeg.txt \
-r 30 \
-video_track_timescale 90000 \
video-jpeg.mp4
Convert two streams from .h264 to .mp4 via copy (no transcoding).
ffmpeg \
-hide_banner \
-i low-motion-video.h264 \
-c copy \
-vsync vfr \
-video_track_timescale 90000 \
low-motion-video.mp4
ffmpeg \
-hide_banner \
-i full-video.h264 \
-c copy \
-video_track_timescale 90000 \
-vsync vfr \
full-video.mp4
Stitch all together by generating another concat directive file.
echo "ffconcat version 1.0" >ffconcat-h264.txt
echo >>ffconcat-h264.txt
echo "file low-motion-video.mp4" >>ffconcat-h264.txt
echo >>ffconcat-h264.txt
echo "file full-video.mp4" >>ffconcat-h264.txt
echo >>ffconcat-h264.txt
echo "file video-jpeg.mp4" >>ffconcat-h264.txt
echo >>ffconcat-h264.txt
ffmpeg \
-hide_banner \
-f concat \
-i ffconcat-h264.txt \
-pix_fmt yuv420p \
-c copy \
-video_track_timescale 90000 \
-vsync vfr \
video-out.mp4
Problem (and attempted troubleshooting)
The above does produce a reasonable output: it plays the first video, then the second video with no timing/rate issues AFAICT, then plays the JPEGs with the time between each JPEG "frame" growing successively, as expected.
But the conversion process produces warnings that concern me (for compatibility with players, or for other real-world streams that may hit some issue my prototyping content doesn't make obvious). Initial attempts generated hundreds of warnings; with some arguments added I reduced them to just a handful, but this handful is stubborn and nothing I have tried helps.
The first conversion of JPEGs to .mp4 goes fine with the following output:
Input #0, concat, from 'ffconcat-jpeg.txt':
Duration: 00:08:25.00, start: 0.000000, bitrate: 0 kb/s
Stream #0:0: Video: png, pal8(pc), 176x341 [SAR 3780:3780 DAR 16:31], 25 fps, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (png (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[libx264 # 0x7fe418008e00] using SAR=1/1
[libx264 # 0x7fe418008e00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
[libx264 # 0x7fe418008e00] profile High 4:4:4 Predictive, level 1.3, 4:4:4, 8-bit
[libx264 # 0x7fe418008e00] 264 - core 163 r3060 5db6aa6 - H.264/MPEG-4 AVC codec - Copyleft 2003-2021 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=11 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'video-jpeg.mp4':
Metadata:
encoder : Lavf58.76.100
Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv444p(tv, progressive), 176x341 [SAR 1:1 DAR 16:31], q=2-31, 30 fps, 90k tbn
Metadata:
encoder : Lavc58.134.100 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
frame= 100 fps=0.0 q=-1.0 Lsize= 157kB time=00:07:55.33 bitrate= 2.7kbits/s speed=2.41e+03x
video:155kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.800846%
[libx264 # 0x7fe418008e00] frame I:1 Avg QP:20.88 size: 574
[libx264 # 0x7fe418008e00] frame P:43 Avg QP:14.96 size: 2005
[libx264 # 0x7fe418008e00] frame B:56 Avg QP:21.45 size: 1266
[libx264 # 0x7fe418008e00] consecutive B-frames: 14.0% 24.0% 30.0% 32.0%
[libx264 # 0x7fe418008e00] mb I I16..4: 36.4% 55.8% 7.9%
[libx264 # 0x7fe418008e00] mb P I16..4: 5.1% 7.5% 11.2% P16..4: 5.6% 8.1% 4.5% 0.0% 0.0% skip:57.9%
[libx264 # 0x7fe418008e00] mb B I16..4: 2.4% 0.9% 3.9% B16..8: 16.2% 8.8% 4.6% direct: 1.2% skip:62.0% L0:56.6% L1:38.7% BI: 4.7%
[libx264 # 0x7fe418008e00] 8x8 transform intra:28.3% inter:3.7%
[libx264 # 0x7fe418008e00] coded y,u,v intra: 26.5% 0.0% 0.0% inter: 9.8% 0.0% 0.0%
[libx264 # 0x7fe418008e00] i16 v,h,dc,p: 82% 13% 4% 0%
[libx264 # 0x7fe418008e00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 20% 8% 71% 1% 0% 0% 0% 0% 0%
[libx264 # 0x7fe418008e00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 41% 11% 29% 4% 2% 3% 1% 7% 1%
[libx264 # 0x7fe418008e00] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 # 0x7fe418008e00] ref P L0: 44.1% 4.2% 28.4% 23.3%
[libx264 # 0x7fe418008e00] ref B L0: 56.2% 32.1% 11.6%
[libx264 # 0x7fe418008e00] ref B L1: 92.4% 7.6%
[libx264 # 0x7fe418008e00] kb/s:2.50
The conversion of the individual .h264 streams to .mp4 generates two types of warnings each. One is [mp4 # 0x7faee3040400] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly, and the other is [mp4 # 0x7faee3040400] pts has no value.
Some posts on SO (I can't find my original finds on that now) suggested that this is safe to ignore and comes from H.264 being an elementary stream that supposedly doesn't carry timestamps. That surprises me a bit, since I produce the stream with the NVENC API and explicitly supply timing information for each frame via the PIC_PARAMS structure: NV_STRUCT(PIC_PARAMS, pp); ...; pp.inputTimeStamp = _frameIndex++ * (H264_CLOCK_RATE / _params.frameRate);, where #define H264_CLOCK_RATE 9000 and _params.frameRate = 30.
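One way to sanity-check that claim (a sketch; its output is not reproduced here) is to dump the packet timestamps of the raw stream with ffprobe and see whether they are actually present or come back as N/A:
ffprobe -hide_banner -select_streams v:0 -show_entries packet=pts,dts,duration -of csv low-motion-video.h264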
Input #0, h264, from 'low-motion-video.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], 30 fps, 30 tbr, 1200k tbn, 60 tbc
Output #0, mp4, to 'low-motion-video.mp4':
Metadata:
encoder : Lavf58.76.100
Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], q=2-31, 30 fps, 30 tbr, 90k tbn, 1200k tbc
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[mp4 # 0x7faee3040400] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mp4 # 0x7faee3040400] pts has no value
[mp4 # 0x7faee3040400] pts has no value
Last message repeated 17985 times
frame=17987 fps=0.0 q=-1.0 Lsize= 79332kB time=00:09:59.50 bitrate=1084.0kbits/s speed=1.59e+03x
video:79250kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.103804%
Input #0, h264, from 'full-video.h264':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (High), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], 30 fps, 30 tbr, 1200k tbn, 60 tbc
Output #0, mp4, to 'full-video.mp4':
Metadata:
encoder : Lavf58.76.100
Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuv420p(progressive), 1440x3040 [SAR 1:1 DAR 9:19], q=2-31, 30 fps, 30 tbr, 90k tbn, 1200k tbc
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[mp4 # 0x7f9381864600] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mp4 # 0x7f9381864600] pts has no value
[mp4 # 0x7f9381864600] pts has no value
Last message repeated 17981 times
frame=17983 fps=0.0 q=-1.0 Lsize= 52976kB time=00:09:59.36 bitrate= 724.1kbits/s speed=1.33e+03x
video:52893kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.156232%
But the most worrisome error for me is from stitching together all interim .mp4 files into one:
[mov,mp4,m4a,3gp,3g2,mj2 # 0x7f9ff2010e00] Auto-inserting h264_mp4toannexb bitstream filter
Input #0, concat, from 'ffconcat-h264.txt':
Duration: N/A, bitrate: 1082 kb/s
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1440x3040 [SAR 1:1 DAR 9:19], 1082 kb/s, 30 fps, 30 tbr, 90k tbn, 60 tbc
Metadata:
handler_name : VideoHandler
vendor_id : [0][0][0][0]
Output #0, mp4, to 'video-out.mp4':
Metadata:
encoder : Lavf58.76.100
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1440x3040 [SAR 1:1 DAR 9:19], q=2-31, 1082 kb/s, 30 fps, 30 tbr, 90k tbn, 90k tbc
Metadata:
handler_name : VideoHandler
vendor_id : [0][0][0][0]
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[mov,mp4,m4a,3gp,3g2,mj2 # 0x7f9fe1009c00] Auto-inserting h264_mp4toannexb bitstream filter
[mp4 # 0x7f9ff2023400] Non-monotonous DTS in output stream 0:0; previous: 53954460, current: 53954460; changing to 53954461. This may result in incorrect timestamps in the output file.
[mov,mp4,m4a,3gp,3g2,mj2 # 0x7f9fd1008a00] Auto-inserting h264_mp4toannexb bitstream filter
[mp4 # 0x7f9ff2023400] Non-monotonous DTS in output stream 0:0; previous: 107900521, current: 107874150; changing to 107900522. This may result in incorrect timestamps in the output file.
[mp4 # 0x7f9ff2023400] Non-monotonous DTS in output stream 0:0; previous: 107900522, current: 107886150; changing to 107900523. This may result in incorrect timestamps in the output file.
frame=36070 fps=0.0 q=-1.0 Lsize= 132464kB time=00:27:54.26 bitrate= 648.1kbits/s speed=6.54e+03x
video:132296kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.126409%
I'm not sure how to deal with those non-monotonous DTS errors, and no matter what I try, nothing budges. I analyzed the interim .mp4 files using ffprobe -show_frames and found that the last frame of each interim .mp4 does not have a DTS, while the previous frames do. E.g.:
...
[FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=53942461
pkt_pts_time=599.360678
pkt_dts=53942461
pkt_dts_time=599.360678
best_effort_timestamp=53942461
best_effort_timestamp_time=599.360678
pkt_duration=3600
pkt_duration_time=0.040000
pkt_pos=54161377
pkt_size=1034
width=1440
height=3040
pix_fmt=yuv420p
sample_aspect_ratio=1:1
pict_type=B
coded_picture_number=17982
display_picture_number=0
interlaced_frame=0
top_field_first=0
repeat_pict=0
color_range=unknown
color_space=unknown
color_primaries=unknown
color_transfer=unknown
chroma_location=left
[/FRAME]
[FRAME]
media_type=video
stream_index=0
key_frame=0
pkt_pts=53927461
pkt_pts_time=599.194011
pkt_dts=N/A
pkt_dts_time=N/A
best_effort_timestamp=53927461
...
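For completeness, the ffprobe invocation was roughly of this form (a sketch; the exact flags and file name may have differed):
ffprobe -hide_banner -show_frames -select_streams v:0 low-motion-video.mp4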
My guess is that as the concat demuxer reads the input (or somewhere else in ffmpeg's conversion pipeline), it sees no DTS set for the last frame and produces a virtual value equal to the last one seen. Further down the pipeline something consumes this input, sees that the DTS value is repeated, issues a warning and offsets it by an increment of one, which may be a somewhat nonsensical/unrealistic timing value.
I tried using -fflags +genpts as suggested in this SO answer, but that doesn't change anything.
Per yet other posts suggesting the issue lies with incompatible tbn and tbc values and possible timebase problems, I tried adding -time_base 1:90000, -enc_time_base 1:90000 and -copytb 1, and nothing budges. The -video_track_timescale 90000 is there because it helped reduce those DTS warnings from hundreds down to 3, but it doesn't eliminate them all.
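For reference, a sketch of how such a variant would look when the genpts flag is added to the final stitch step (this is an illustration, not necessarily the exact command that was run; the -copytb / -time_base / -enc_time_base variants were placed analogously):
ffmpeg \
-hide_banner \
-fflags +genpts \
-f concat \
-i ffconcat-h264.txt \
-c copy \
-video_track_timescale 90000 \
-vsync vfr \
video-out.mp4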
Question
What is missing and how can I get ffmpeg to perform conversions without these warnings, to be sure it produces proper, well-formed output?
I am trying to add my SRT file to the video using options given in earlier answers here. My input file has captions in it by default. I have tried different ways to get the captions enabled in my encoded video. Below is the command I used.
ffmpeg -i input.ts -i captions.srt -b:a 32000 -ar 48000 -force_key_frames 'expr:gte(t,n_forced*3)' -acodec libfaac -hls_flags single_file -hls_list_size 0 -hls_time 3 -vcodec libx264 -s 320x240 -b:v 512000 -maxrate 512000 -c:s mov_text outfile.ts
But I can't see the captions when I check the MediaInfo of the encoded file. The log of my command is below.
[mpegts # 0x56412e67b0c0] max_analyze_duration 5000000 reached at 5024000 microseconds st:1
input.ts FPS 29.970030 1
Input #0, mpegts, from 'input.ts':
Duration: 00:03:00.07, start: 1.400000, bitrate: 2172 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: mpeg2video (Main), 1 reference frame ([2][0][0][0] / 0x0002), yuv420p(tv), 704x480 [SAR 10:11 DAR 4:3], Closed Captions, max. 15000 kb/s, 29.97 fps, 29.97 tbr, 90k tbn, 59.94 tbc
Stream #0:1[0x101](eng): Audio: ac3 ([129][0][0][0] / 0x0081), 48000 Hz, stereo, fltp, 192 kb/s
Input #1, srt, from 'captions.srt':
Duration: N/A, bitrate: N/A
Stream #1:0: Subtitle: subrip
[graph 0 input from stream 0:0 # 0x56412e678540] w:704 h:480 pixfmt:yuv420p tb:1/90000 fr:30000/1001 sar:10/11 sws_param:flags=2
[scaler for output stream 0:0 # 0x56412e9caac0] w:320 h:240 flags:'bicubic' interl:0
[scaler for output stream 0:0 # 0x56412e9caac0] w:704 h:480 fmt:yuv420p sar:10/11 -> w:320 h:240 fmt:yuv420p sar:1/1 flags:0x4
[graph 1 input from stream 0:1 # 0x56412e9f74e0] tb:1/48000 samplefmt:fltp samplerate:48000 chlayout:0x3
[audio format for output stream 0:1 # 0x56412e9f7b40] auto-inserting filter 'auto-inserted resampler 0' between the filter 'Parsed_anull_0' and the filter 'audio format for output stream 0:1'
[auto-inserted resampler 0 # 0x56412e9f9f20] ch:2 chl:stereo fmt:fltp r:48000Hz -> ch:2 chl:stereo fmt:s16 r:48000Hz
[libx264 # 0x56412e9bb8c0] VBV maxrate specified, but no bufsize, ignored
[libx264 # 0x56412e9bb8c0] using SAR=1/1
[libx264 # 0x56412e9bb8c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2
[libx264 # 0x56412e9bb8c0] profile High, level 2.0
[mpegts # 0x56412e9ba5a0] muxrate VBR, pcr every 2 pkts, sdt every 200, pat/pmt every 40 pkts
Output #0, mpegts, to 'my_encoded_all-3.ts':
Metadata:
encoder : Lavf57.25.100
Stream #0:0: Video: h264 (libx264), -1 reference frame, yuv420p, 320x240 [SAR 1:1 DAR 4:3], q=-1--1, 512 kb/s, 29.97 fps, 90k tbn, 29.97 tbc
Metadata:
encoder : Lavc57.24.102 libx264
Side data:
unknown side data type 10 (24 bytes)
Stream #0:1(eng): Audio: aac (libfaac), 48000 Hz, stereo, s16, 32 kb/s
Metadata:
encoder : Lavc57.24.102 libfaac
Stream #0:2(eng): Subtitle: subrip (srt), 320x240
Metadata:
encoder : Lavc57.24.102 srt
Stream mapping:
Stream #0:0 -> #0:0 (mpeg2video (native) -> h264 (libx264))
Stream #0:1 -> #0:1 (ac3 (native) -> aac (libfaac))
Stream #1:0 -> #0:2 (subrip (srt) -> subrip (srt))
Press [q] to stop, [?] for help
[scaler for output stream 0:0 # 0x56412e9caac0] w:704 h:480 fmt:yuv420p sar:40/33 -> w:320 h:240 fmt:yuv420p sar:4/3 flags:0x4
No more output streams to write to, finishing.
frame= 5377 fps=385 q=-1.0 Lsize= 16672kB time=00:59:16.18 bitrate= 38.4kbits/s speed= 255x
video:11401kB audio:1410kB subtitle:446kB other streams:0kB global headers:0kB muxing overhead: 25.753614%
Input file #0 (input.ts):
Input stream #0:0 (video): 5380 packets read (40279443 bytes); 5377 frames decoded;
Input stream #0:1 (audio): 5625 packets read (4320000 bytes); 5625 frames decoded (8640000 samples);
Total: 11005 packets (44599443 bytes) demuxed
Input file #1 (captions.srt):
Input stream #1:0 (subtitle): 10972 packets read (447147 bytes); 10972 frames decoded;
Total: 10972 packets (447147 bytes) demuxed
Output file #0 (output.ts):
Output stream #0:0 (video): 5377 frames encoded; 5377 packets muxed (11675098 bytes);
Output stream #0:1 (audio): 8438 frames encoded (8640000 samples); 8439 packets muxed (1444109 bytes);
Output stream #0:2 (subtitle): 10972 frames encoded; 10972 packets muxed (456619 bytes);
Total: 24788 packets (13575826 bytes) muxed
[libx264 # 0x56412e9bb8c0] frame I:81 Avg QP:15.08 size: 16370
[libx264 # 0x56412e9bb8c0] frame P:2312 Avg QP:17.77 size: 3393
[libx264 # 0x56412e9bb8c0] frame B:2984 Avg QP:22.38 size: 839
[libx264 # 0x56412e9bb8c0] consecutive B-frames: 20.6% 13.2% 9.4% 56.8%
[libx264 # 0x56412e9bb8c0] mb I I16..4: 11.6% 37.0% 51.4%
[libx264 # 0x56412e9bb8c0] mb P I16..4: 1.2% 3.2% 2.4% P16..4: 33.7% 18.0% 15.7% 0.0% 0.0% skip:25.6%
[libx264 # 0x56412e9bb8c0] mb B I16..4: 0.2% 0.3% 0.2% B16..8: 30.7% 9.2% 3.2% direct: 5.1% skip:51.1% L0:33.6% L1:44.5% BI:21.9%
[libx264 # 0x56412e9bb8c0] final ratefactor: 16.76
[libx264 # 0x56412e9bb8c0] 8x8 transform intra:43.0% inter:49.8%
[libx264 # 0x56412e9bb8c0] coded y,uvDC,uvAC intra: 77.9% 85.1% 68.8% inter: 22.9% 21.7% 6.0%
[libx264 # 0x56412e9bb8c0] i16 v,h,dc,p: 30% 38% 5% 26%
[libx264 # 0x56412e9bb8c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 23% 18% 4% 5% 7% 6% 7% 8%
[libx264 # 0x56412e9bb8c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 23% 11% 5% 6% 8% 6% 7% 6%
[libx264 # 0x56412e9bb8c0] i8c dc,h,v,p: 46% 24% 21% 8%
[libx264 # 0x56412e9bb8c0] Weighted P-Frames: Y:4.2% UV:2.6%
[libx264 # 0x56412e9bb8c0] ref P L0: 73.3% 10.9% 11.1% 4.5% 0.1%
[libx264 # 0x56412e9bb8c0] ref B L0: 92.1% 6.3% 1.6%
[libx264 # 0x56412e9bb8c0] ref B L1: 97.4% 2.6%
I found no issues during encoding, but I can't see the captions enabled in my encoded output video. I played it in VLC; there are no subtitle tracks.
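For what it's worth, a quick way to double-check whether a subtitle stream made it into the output at all would be something like this (a sketch, not taken from my actual session):
ffprobe -hide_banner -select_streams s -show_entries stream=index,codec_name outfile.ts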
Can't we add the subtitles to a video while encoding?
Any help in achieving this would be appreciated.
Folks, I have the following ffmpeg command:
ffmpeg
-i video1a -i video2a -i video3a -i video4a
-i video1b -i video2b -i video3b -i video4b
-i video1c
-filter_complex "
nullsrc=size=640x480 [base];
[0:v] setpts=PTS-STARTPTS+ 0/TB, scale=320x240 [1a];
[1:v] setpts=PTS-STARTPTS+ 300/TB, scale=320x240 [2a];
[2:v] setpts=PTS-STARTPTS+ 400/TB, scale=320x240 [3a];
[3:v] setpts=PTS-STARTPTS+ 400/TB, scale=320x240 [4a];
[4:v] setpts=PTS-STARTPTS+2500/TB, scale=320x240 [1b];
[5:v] setpts=PTS-STARTPTS+ 800/TB, scale=320x240 [2b];
[6:v] setpts=PTS-STARTPTS+ 700/TB, scale=320x240 [3b];
[7:v] setpts=PTS-STARTPTS+ 800/TB, scale=320x240 [4b];
[8:v] setpts=PTS-STARTPTS+3000/TB, scale=320x240 [1c];
[base][1a] overlay=eof_action=pass [o1];
[o1][1b] overlay=eof_action=pass [o1];
[o1][1c] overlay=eof_action=pass:shortest=1 [o1];
[o1][2a] overlay=eof_action=pass:x=320 [o2];
[o2][2b] overlay=eof_action=pass:x=320 [o2];
[o2][3a] overlay=eof_action=pass:y=240 [o3];
[o3][3b] overlay=eof_action=pass:y=240 [o3];
[o3][4a] overlay=eof_action=pass:x=320:y=240[o4];
[o4][4b] overlay=eof_action=pass:x=320:y=240"
-c:v libx264 output.mp4
I have just found out something regarding the files I will be processing with the above command: some mp4 files contain both video and audio, some contain audio alone, and some contain video alone. I am already able to determine which ones have audio/video/both using ffprobe. My question is how to modify the above command to account for what each file contains (video/audio/both).
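(The check I use is roughly of this form, a sketch that simply lists the codec type of every stream in a file:)
ffprobe -v error -show_entries stream=codec_type -of csv=p=0 video1a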
This is the breakdown of which file has video, audio, or both:
video   contains
======= ========
Area 1:
video1a audio
video1b both
video1c video
Area 2:
video2a video
video2b audio
Area 3:
video3a video
video3b audio
Area 4:
video4a video
video4b both
My question is how to correctly modify the command above to specify what each file has (audio/video/both). Thank you.
Update #1
I ran a test as follows:
-i "video1a.flv"
-i "video1b.flv"
-i "video1c.flv"
-i "video2a.flv"
-i "video3a.flv"
-i "video4a.flv"
-i "video4b.flv"
-i "video4c.flv"
-i "video4d.flv"
-i "video4e.flv"
-filter_complex
nullsrc=size=640x480[base];
[0:v]setpts=PTS-STARTPTS+120/TB,scale=320x240[1a];
[1:v]setpts=PTS-STARTPTS+3469115/TB,scale=320x240[1b];
[2:v]setpts=PTS-STARTPTS+7739299/TB,scale=320x240[1c];
[5:v]setpts=PTS-STARTPTS+4390466/TB,scale=320x240[4a];
[6:v]setpts=PTS-STARTPTS+6803937/TB,scale=320x240[4b];
[7:v]setpts=PTS-STARTPTS+8242005/TB,scale=320x240[4c];
[8:v]setpts=PTS-STARTPTS+9811577/TB,scale=320x240[4d];
[9:v]setpts=PTS-STARTPTS+10765190/TB,scale=320x240[4e];
[base][1a]overlay=eof_action=pass[o1];
[o1][1b]overlay=eof_action=pass[o1];
[o1][1c]overlay=eof_action=pass:shortest=1[o1];
[o1][4a]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4b]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4c]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4d]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4e]overlay=eof_action=pass:x=320:y=240;
[0:a]asetpts=PTS-STARTPTS+120/TB,aresample=async=1,apad[a1a];
[1:a]asetpts=PTS-STARTPTS+3469115/TB,aresample=async=1,apad[a1b];
[2:a]asetpts=PTS-STARTPTS+7739299/TB,aresample=async=1[a1c];
[3:a]asetpts=PTS-STARTPTS+82550/TB,aresample=async=1,apad[a2a];
[4:a]asetpts=PTS-STARTPTS+2687265/TB,aresample=async=1,apad[a3a];
[a1a][a1b][a1c][a2a][a3a]amerge=inputs=5
-c:v libx264 -c:a aac -ac 2 output.mp4
This is the stream data from ffmpeg:
Input #0
Stream #0:0: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
Stream #0:1: Audio: nellymoser, 11025 Hz, mono, flt
Input #1
Stream #1:0: Audio: nellymoser, 11025 Hz, mono, flt
Stream #1:1: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
Input #2
Stream #2:0: Audio: nellymoser, 11025 Hz, mono, flt
Stream #2:1: Video: vp6f, yuv420p, 160x128, 1k tbr, 1k tbn
Input #3
Stream #3:0: Audio: nellymoser, 11025 Hz, mono, flt
Input #4
Stream #4:0: Audio: nellymoser, 11025 Hz, mono, flt
Input #5
Stream #5:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
Input #6
Stream #6:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
Input #7
Stream #7:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
Input #8
Stream #8:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
Input #9
Stream #9:0: Video: vp6f, yuv420p, 1680x1056, 1k tbr, 1k tbn
This is the error:
Stream mapping:
Stream #0:0 (vp6f) -> setpts
Stream #0:1 (nellymoser) -> asetpts
Stream #1:0 (nellymoser) -> asetpts
Stream #1:1 (vp6f) -> setpts
Stream #2:0 (nellymoser) -> asetpts
Stream #2:1 (vp6f) -> setpts
Stream #3:0 (nellymoser) -> asetpts
Stream #4:0 (nellymoser) -> asetpts
Stream #5:0 (vp6f) -> setpts
Stream #6:0 (vp6f) -> setpts
Stream #7:0 (vp6f) -> setpts
Stream #8:0 (vp6f) -> setpts
Stream #9:0 (vp6f) -> setpts
overlay -> Stream #0:0 (libx264)
amerge -> Stream #0:1 (aac)
Press [q] to stop, [?] for help
Enter command: <target>|all <time>|-1 <command>[ <argument>]
Parse error, at least 3 arguments were expected, only 1 given in string 'ho Oscar'
[Parsed_amerge_39 # 0aa147c0] No channel layout for input 1
Last message repeated 1 times
[AVFilterGraph # 05e01900] The following filters could not choose their formats: Parsed_amerge_39
Consider inserting the (a)format filter near their input or output.
Error reinitializing filters!
Failed to inject frame into filter network: I/O error
Error while processing the decoded data for stream #4:0
Conversion failed!
Update #2
Would it be like this?
-i "video1a.flv"
-i "video1b.flv"
-i "video1c.flv"
-i "video2a.flv"
-i "video3a.flv"
-i "video4a.flv"
-i "video4b.flv"
-i "video4c.flv"
-i "video4d.flv"
-i "video4e.flv"
-filter_complex
nullsrc=size=640x480[base];
[0:v]setpts=PTS-STARTPTS+120/TB,scale=320x240[1a];
[1:v]setpts=PTS-STARTPTS+3469115/TB,scale=320x240[1b];
[2:v]setpts=PTS-STARTPTS+7739299/TB,scale=320x240[1c];
[5:v]setpts=PTS-STARTPTS+4390466/TB,scale=320x240[4a];
[6:v]setpts=PTS-STARTPTS+6803937/TB,scale=320x240[4b];
[7:v]setpts=PTS-STARTPTS+8242005/TB,scale=320x240[4c];
[8:v]setpts=PTS-STARTPTS+9811577/TB,scale=320x240[4d];
[9:v]setpts=PTS-STARTPTS+10765190/TB,scale=320x240[4e];
[base][1a]overlay=eof_action=pass[o1];
[o1][1b]overlay=eof_action=pass[o1];
[o1][1c]overlay=eof_action=pass:shortest=1[o1];
[o1][4a]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4b]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4c]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4d]overlay=eof_action=pass:x=320:y=240[o4];
[o4][4e]overlay=eof_action=pass:x=320:y=240;
[0:a]asetpts=PTS-STARTPTS+120/TB,aresample=async=1,pan=1c|c0=c0,apad[a1a];
[1:a]asetpts=PTS-STARTPTS+3469115/TB,aresample=async=1,pan=1c|c0=c0,apad[a1b];
[2:a]asetpts=PTS-STARTPTS+7739299/TB,aresample=async=1,pan=1c|c0=c0[a1c];
[3:a]asetpts=PTS-STARTPTS+82550/TB,aresample=async=1,pan=1c|c0=c0,apad[a2a];
[4:a]asetpts=PTS-STARTPTS+2687265/TB,aresample=async=1,pan=1c|c0=c0,apad[a3a];
[a1a][a1b][a1c][a2a][a3a]amerge=inputs=5
-c:v libx264 -c:a aac -ac 2 output.mp4
Update #3
Now I am getting this error:
Stream mapping:
Stream #0:0 (vp6f) -> setpts
Stream #0:1 (nellymoser) -> asetpts
Stream #1:0 (nellymoser) -> asetpts
Stream #1:1 (vp6f) -> setpts
Stream #2:0 (nellymoser) -> asetpts
Stream #2:1 (vp6f) -> setpts
Stream #3:0 (nellymoser) -> asetpts
Stream #4:0 (nellymoser) -> asetpts
Stream #5:0 (vp6f) -> setpts
Stream #6:0 (vp6f) -> setpts
Stream #7:0 (vp6f) -> setpts
Stream #8:0 (vp6f) -> setpts
Stream #9:0 (vp6f) -> setpts
overlay -> Stream #0:0 (libx264)
amerge -> Stream #0:1 (aac)
Press [q] to stop, [?] for help
Enter command: <target>|all <time>|-1 <command>[ <argument>]
Parse error, at least 3 arguments were expected, only 1 given in string 'ho Oscar'
[Parsed_amerge_44 # 0a9808c0] No channel layout for input 1
[Parsed_amerge_44 # 0a9808c0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
[Parsed_pan_27 # 07694800] Pure channel mapping detected: 0
[Parsed_pan_31 # 07694a80] Pure channel mapping detected: 0
[Parsed_pan_35 # 0a980300] Pure channel mapping detected: 0
[Parsed_pan_38 # 0a980500] Pure channel mapping detected: 0
[Parsed_pan_42 # 0a980780] Pure channel mapping detected: 0
[libx264 # 06ad78c0] using SAR=1/1
[libx264 # 06ad78c0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 # 06ad78c0] profile High, level 3.0
[libx264 # 06ad78c0] 264 - core 155 r2901 7d0ff22 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=15 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output.mp4':
Metadata:
canSeekToEnd : false
encoder : Lavf58.16.100
Stream #0:0: Video: h264 (libx264) (avc1 / 0x31637661), yuv420p(progressive), 640x480 [SAR 1:1 DAR 4:3], q=-1--1, 25 fps, 12800 tbn, 25 tbc (default)
Metadata:
encoder : Lavc58.19.102 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 11025 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
encoder : Lavc58.19.102 aac
...
...
Error while processing the decoded data for stream #1:1
[libx264 # 06ad78c0] frame I:133 Avg QP: 8.58 size: 6481
[libx264 # 06ad78c0] frame P:8358 Avg QP:17.54 size: 1386
[libx264 # 06ad78c0] frame B:24582 Avg QP:24.27 size: 105
[libx264 # 06ad78c0] consecutive B-frames: 0.6% 0.5% 0.7% 98.1%
[libx264 # 06ad78c0] mb I I16..4: 78.3% 16.1% 5.6%
[libx264 # 06ad78c0] mb P I16..4: 0.3% 0.7% 0.1% P16..4: 9.6% 3.0% 1.4% 0.0% 0.0% skip:84.9%
[libx264 # 06ad78c0] mb B I16..4: 0.0% 0.0% 0.0% B16..8: 2.9% 0.1% 0.0% direct: 0.2% skip:96.8% L0:47.0% L1:49.0% BI: 4.0%
[libx264 # 06ad78c0] 8x8 transform intra:35.0% inter:70.1%
[libx264 # 06ad78c0] coded y,uvDC,uvAC intra: 36.8% 43.7% 27.3% inter: 1.6% 3.0% 0.1%
[libx264 # 06ad78c0] i16 v,h,dc,p: 79% 8% 4% 9%
[libx264 # 06ad78c0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 32% 20% 12% 3% 6% 8% 6% 5% 7%
[libx264 # 06ad78c0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 38% 22% 9% 4% 6% 7% 5% 5% 4%
[libx264 # 06ad78c0] i8c dc,h,v,p: 62% 15% 16% 7%
[libx264 # 06ad78c0] Weighted P-Frames: Y:0.6% UV:0.5%
[libx264 # 06ad78c0] ref P L0: 65.4% 12.3% 14.3% 7.9% 0.0%
[libx264 # 06ad78c0] ref B L0: 90.2% 7.5% 2.3%
[libx264 # 06ad78c0] ref B L1: 96.3% 3.7%
[libx264 # 06ad78c0] kb/s:90.81
[aac # 06ad8480] Qavg: 65519.970
[aac # 06ad8480] 2 frames left in the queue on closing
Conversion failed!
Use
ffmpeg
-i video1a -i video2a -i video3a -i video4a
-i video1b -i video2b -i video3b -i video4b
-i video1c
-filter_complex "
nullsrc=size=640x480 [base];
[1:v] setpts=PTS-STARTPTS+ 300/TB, scale=320x240 [2a];
[2:v] setpts=PTS-STARTPTS+ 400/TB, scale=320x240 [3a];
[3:v] setpts=PTS-STARTPTS+ 400/TB, scale=320x240 [4a];
[4:v] setpts=PTS-STARTPTS+2500/TB, scale=320x240 [1b];
[7:v] setpts=PTS-STARTPTS+2500/TB, scale=320x240 [4b];
[8:v] setpts=PTS-STARTPTS+3000/TB, scale=320x240 [1c];
[base][1b] overlay=eof_action=pass [o1];
[o1][1c] overlay=eof_action=pass:shortest=1 [o1];
[o1][2a] overlay=eof_action=pass:x=320 [o2];
[o2][3a] overlay=eof_action=pass:y=240 [o3];
[o3][4a] overlay=eof_action=pass:x=320:y=240[o4];
[o4][4b] overlay=eof_action=pass:x=320:y=240;
[0:a] asetpts=PTS-STARTPTS+ 0/TB, aresample=async=1, apad [a1a];
[4:a] asetpts=PTS-STARTPTS+2500/TB, aresample=async=1 [a1b];
[5:a] asetpts=PTS-STARTPTS+ 800/TB, aresample=async=1, apad [a2b];
[6:a] asetpts=PTS-STARTPTS+ 700/TB, aresample=async=1, apad [a3b];
[7:a] asetpts=PTS-STARTPTS+ 800/TB, aresample=async=1, apad [a4b];
[a1a][a1b][a2b][a3b][a4b]amerge=inputs=5"
-c:v libx264 -c:a aac -ac 2 output.mp4
For each video stream, the timestamp and scale filters should be applied, and the results then overlaid.
For each audio stream, the timestamp filter should be applied for the time offset, then aresample=async=1 to insert silence up to the start time, then apad to extend the end of the audio with silence. The apad should be skipped for the audio stream which ends last. The amerge joins all processed audio streams and ends when the last audio stream ends.
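To make the audio pattern concrete, here is a minimal two-input sketch of it (the offsets are placeholders, and -vn keeps the sketch audio-only); note that only the stream that ends earlier gets apad, while the last-ending stream does not:
ffmpeg -i video1a -i video1b -filter_complex "
[0:a] asetpts=PTS-STARTPTS+   0/TB, aresample=async=1, apad [a0];
[1:a] asetpts=PTS-STARTPTS+2500/TB, aresample=async=1 [a1];
[a0][a1] amerge=inputs=2" -vn -c:a aac -ac 2 audio-only.mp4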
So I want to try to use ffmpeg's -map option to do the following:
Merge audio and jpg
Concatenate mp4 to end
still.jpg + audio.mp3 -> render1.mp4
render1.mp4 + video.mp4 = output.mp4
Right now I have
/home/admin/ffmpeg/ffmpeg -i still.jpg -i audio.mp3 render1.mp4
But I also want to go ahead and append video.mp4 as well. I tried reading about the -map option, but I'm confused about how you know how many streams a file has, etc.
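As an aside, the stream layout of any file can be listed with ffprobe, e.g. something like this (the file name here is just an example):
ffprobe -hide_banner -show_entries stream=index,codec_type,codec_name -of csv=p=0 video.mp4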
Here's the output of the command (note that I have different file names):
Input #0, image2, from 'slide_2.jpg':
Duration: 00:00:00.04, start: 0.000000, bitrate: 8276 kb/s
Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1280x720 [SAR 1:1 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
Input #1, mp3, from '/home/admin/web/admin.simplewebevents.com/public_html/cron/steveng1.mp3':
Metadata:
encoder : Lavf57.56.100
Duration: 00:00:05.78, start: 0.023021, bitrate: 64 kb/s
Stream #1:0: Audio: mp3, 48000 Hz, mono, s16p, 64 kb/s
File 'introFile62.mp4' already exists. Overwrite ? [y/N] y
Stream mapping:
Stream #0:0 -> #0:0 (mjpeg (native) -> h264 (libx264))
Stream #1:0 -> #0:1 (mp3 (native) -> aac (native))
Press [q] to stop, [?] for help
No pixel format specified, yuvj420p for H.264 encoding chosen.
Use -pix_fmt yuv420p for compatibility with outdated media players.
[libx264 # 0x3589a00] using SAR=1/1
[libx264 # 0x3589a00] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
[libx264 # 0x3589a00] profile High, level 3.1
[libx264 # 0x3589a00] 264 - core 148 - H.264/MPEG-4 AVC codec - Copyleft 2003-2017 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=1 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'introFile62.mp4':
Metadata:
encoder : Lavf57.72.101
Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuvj420p(pc, progressive), 1280x720 [SAR 1:1 DAR 16:9], q=-1--1, 25 fps, 12800 tbn, 25 tbc
Metadata:
encoder : Lavc57.96.101 libx264
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
Stream #0:1: Audio: aac (LC) ([64][0][0][0] / 0x0040), 48000 Hz, mono, fltp, 69 kb/s
Metadata:
encoder : Lavc57.96.101 aac
frame= 1 fps=0.2 q=28.0 Lsize= 70kB time=00:00:05.76 bitrate= 99.2kbits/s speed=1.27x
video:18kB audio:49kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 3.483214%
[libx264 # 0x3589a00] frame I:1 Avg QP:25.71 size: 18268
[libx264 # 0x3589a00] mb I I16..4: 25.6% 63.4% 10.9%
[libx264 # 0x3589a00] 8x8 transform intra:63.4%
[libx264 # 0x3589a00] coded y,uvDC,uvAC intra: 14.3% 18.5% 8.9%
[libx264 # 0x3589a00] i16 v,h,dc,p: 81% 9% 10% 1%
[libx264 # 0x3589a00] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 50% 12% 33% 1% 1% 1% 0% 1% 1%
[libx264 # 0x3589a00] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 46% 21% 13% 3% 4% 5% 2% 4% 2%
[libx264 # 0x3589a00] i8c dc,h,v,p: 76% 8% 15% 1%
[libx264 # 0x3589a00] kb/s:3653.60
[aac # 0x358aea0] Qavg: 118.057
I am trying to convert a raw video file captured from a Cisco EX60 to a valid MP4 file.
I use the following command
ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1920x1080 -r 25 -i input2 -vcodec libx264 output2.mp4
and get
libpostproc 54. 0.100 / 54. 0.100
[rawvideo # 0000000000703920] Estimating duration from bitrate, this may be inaccurate
Input #0, rawvideo, from 'input2':
Duration: 00:00:00.20, start: 0.000000, bitrate: 630883 kb/s
Stream #0:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 1920x1080, 622080 kb/s, 25 tbr, 25 tbn, 25 tbc
File 'output2.mp4' already exists. Overwrite ? [y/N] y
[libx264 # 00000000007115e0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 AVX2 LZCNT BMI2
[libx264 # 00000000007115e0] profile High, level 4.0
[libx264 # 00000000007115e0] 264 - core 148 r2638 7599210 - H.264/MPEG-4 AVC codec - Copyleft 2003-2015 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=6 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, mp4, to 'output2.mp4':
Metadata:
encoder : Lavf57.18.100
Stream #0:0: Video: h264 (libx264) ([33][0][0][0] / 0x0021), yuv420p, 1920x1080, q=-1--1, 25 fps, 12800 tbn, 25 tbc
Metadata:
encoder : Lavc57.15.100 libx264
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[rawvideo # 00000000007101c0] Invalid buffer size, packet size 220075 < expected frame_size 3110400
Error while decoding stream #0:0: Invalid argument
frame= 5 fps=2.5 q=-1.0 Lsize= 6514kB time=00:00:00.12 bitrate=444706.9kbits/s
video:6513kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.013209%
[libx264 # 00000000007115e0] frame I:1 Avg QP:35.99 size:1334571
[libx264 # 00000000007115e0] frame P:4 Avg QP:35.00 size:1333616
[libx264 # 00000000007115e0] mb I I16..4: 0.0% 0.0% 100.0%
[libx264 # 00000000007115e0] mb P I16..4: 98.7% 0.0% 1.3% P16..4: 0.0% 0.0% 0.0% 0.0% 0.0% skip: 0.0%
with
Invalid buffer size, packet size 220075 < expected frame_size 3110400
Error while decoding stream #0:0: Invalid argument
inside.
When I use just
ffmpeg -f h264 -i input -vcodec copy -r 25 outfile.mp4
It replaces the initial I-frame with B-frames so I cannot play it back. I can view it with VLC, but not with Windows Media Player, for example.
What is wrong with the command?
Thanks
Efim
Stream #0:0: Video: h264 (libx264)
This is the source video stream, which is already encoded as H.264, so it is unlikely that treating it as rawvideo will work. You actually have to decode the stream in order to encode it again.
As for why you can't play it back in Windows Media Player, please check the H.264 encoding guide. By default libx264 uses the High profile, which is not compatible with every device and player. For maximum compatibility, add the following options:
-profile:v baseline -level 3.0
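Applied to your second command, that would look roughly like this (a sketch; since the profile cannot be changed with a stream copy, the video is re-encoded):
ffmpeg -f h264 -i input -c:v libx264 -profile:v baseline -level 3.0 -r 25 outfile.mp4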