Changing resolution mid-video with FFmpeg

I have a source video (mpeg2video) which I'm transcoding to x264. The source contains two different programs recorded from TV: one in 4:3 AR and the other in 16:9 AR. When I play the source file in VLC, the player correctly resizes to show the video at the correct AR. So far so good.
When I transcode, the conversion process auto-detects the AR from the first few frames and then transcodes the whole video using that AR. If the 16:9 section comes first, the whole conversion is done in 16:9 and the 4:3 section looks stretched horizontally. If the 4:3 section is at the start of the source file, the whole transcode is done in 4:3 and the 16:9 section looks squashed horizontally.
No black bars are ever visible.
Here's my command:
nice -n 17 ffmpeg -i source.mpg -acodec libfaac -ar 48000 -ab 192k -async 1 -copyts -vcodec libx264 -b 1250k -threads 2 -level 31 -map 0:0 -map 0:1 -map 0:2 -scodec copy -deinterlace output.mkv
I don't fully understand what's going on. How do I get the same change in AR mid-video in the output file that the input video has?

I don't think ffmpeg is designed to do that mid-stream. You would have to write your own application using libav for it. The simpler way would be to create two chunks of video that you then combine.
EDIT:
The best way to deal with it is to detect the change of AR yourself, transcode the two segments separately, and join them.
EDIT2:
Use ffmpeg itself to chunk the video, demux anything you want, and mux it back again. It should work fine; you needn't use avidemux.
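For illustration, a minimal sketch of that approach, assuming the AR change happens at a known timestamp (00:30:00 below is purely hypothetical) and abbreviating the question's encoding options:
# split at the (assumed) AR change point, transcoding each part
nice -n 17 ffmpeg -i source.mpg -to 00:30:00 -acodec libfaac -ab 192k -vcodec libx264 -b 1250k part1.mkv
nice -n 17 ffmpeg -ss 00:30:00 -i source.mpg -acodec libfaac -ab 192k -vcodec libx264 -b 1250k part2.mkv
# rejoin without re-encoding, using the concat demuxer
printf "file 'part1.mkv'\nfile 'part2.mkv'\n" > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mkv
Matroska tolerates the aspect-ratio change at the join, so players should switch AR mid-file much as they do with the original MPEG-2 source.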

Related

Using ffmpeg, jpg to mp4 to mpegts, play with HLS M3U8, only first TS file plays - why?

Before posting I searched and found similar questions on Stack Overflow (I list some below), but none helped me towards a solution, hence this post. The duration each image is shown within the movie file differs from what most posts I have seen cover.
A camera captures one image every 30 seconds. I need to stream them, preferably via HLS, so I wrap two images in an MP4 and then convert the MP4 to mpegts. Each MP4 and TS file plays fine individually (each contains two images, each image transitions after 30 seconds, and each movie file is 1 minute long).
When I reference the two TS files in an M3U8 playlist, only the first TS file gets played. Can anyone advise why it stops and how I can get it to play all the TS files I expect to create, not just the first one? Besides my ffmpeg commands, I also include my VLC log file (though I expect to stream to Firefox/Chrome clients). I am using ffmpeg 4.2.2-static installed on an AWS EC2 instance (Amazon Linux 2 AMI).
I have four JPGs named image11.jpg, image12.jpg, image21.jpg, image22.jpg. The images look near-identical; only the timestamp in the top left changes.
The following command creates 1.mp4 from image11.jpg and image12.jpg, each image displayed for 30 seconds, for a total duration of 1 minute. It plays as expected.
ffmpeg -y -framerate 1/30 -f image2 -i image1%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 1.mp4
I then convert 1.mp4 to an mpegts file, creating 1.ts. It plays as expected.
ffmpeg -y -i 1.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 1.ts
I repeat the above steps, this time specific to image21.jpg and image22.jpg, creating 2.mp4 and 2.ts:
ffmpeg -y -framerate 1/30 -f image2 -i image2%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 2.mp4
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 2.ts
Thus now I have 1.mp4, 1.ts, 2.mp4, 2.ts and all four play individually just fine.
Using ffprobe I can confirm their duration is 60 seconds, for example:
ffprobe -i 1.ts -v quiet -show_entries format=duration -hide_banner -print_format json
My m3u8 playlist follows:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-TARGETDURATION:60
#EXTINF:60.000,
1.ts
#EXTINF:60.000,
2.ts
#EXT-X-ENDLIST
Can anyone advise where I am going wrong?
VLC Error Log (though I expect to play via web browser)
I have researched the process using these (and other pages) as a guide:
How to create a video from images with ffmpeg
convert from jpg to mp4 by ffmpeg
ffmpeg examples page
FFMPEG An Intermediate Guide/image sequence
How to use FFmpeg to convert images to video
Take a look at the start_pts/start_time values in the ffprobe -show_streams output; my guess is that they all start at or near zero, which will cause playback to fail after your first segment.
You can still produce the segments independently, but you will want to use something like -output_ts_offset to correctly set the timestamps for subsequent segments.
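For illustration, a hedged sketch of both steps, assuming the first segment is exactly 60 seconds long and mirroring the question's own commands:
# inspect the starting timestamps of a segment
ffprobe -v quiet -show_entries stream=start_pts,start_time -print_format json 2.ts
# re-create the second segment with its timestamps shifted by 60 seconds
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -output_ts_offset 60 -f mpegts 2.ts
If 2.ts then starts at 60 seconds instead of zero, the player can carry on from 1.ts without the timeline jumping back.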
The following solution works well for me. I have tested it uninterrupted for more than two hours and believe it ticks all my boxes. (Edited because I forgot the all-important -re flag.)
ffmpeg loops continuously, reading test.jpg and streaming it to my RTMP server. When my camera posts an image every 30 seconds, I copy the new image over the existing test.jpg, which in effect changes what is streamed out.
Note: the command below is all one line; I have added line breaks to aid reading. The order of the parameters is important: -loop and -fflags +genpts, for example, must appear before the -i parameter.
ffmpeg
-re
-loop 1
-fflags +genpts
-framerate 1/30
-i test.jpg
-c:v libx264
-vf fps=25
-pix_fmt yuvj420p
-crf 30
-f fifo -attempt_recovery 1 -recovery_wait_time 1
-f flv rtmp://localhost:5555/video/test
Some arguments explained:
-re means read the input at its native frame rate, i.e. play in real time
-loop 1 (1 turns looping on, 0 off)
-fflags +genpts is something I only half understand. PTS (presentation timestamp) I believe marks when each frame should be shown, and without this flag the PTS is reset to zero with every new image. Using this argument means I avoid EXT-X-DISCONTINUITY when a new image is served.
-framerate 1/30 means one frame every 30 seconds
-i test.jpg is my image 'placeholder'. As new images are received via a separate script, it overwrites this image. Combined with -loop, this means the ffmpeg output will pick up the new image.
-c:v libx264 selects H.264 video output encoding
-vf fps=25 sets the output frame rate. Removing this, or using a different value, resulted in my output stream not holding each image for 30 seconds.
-pix_fmt yuvj420p (sometimes I have seen yuv420p referenced, but that did not work in my environment). I believe JPEGs can use different colour ranges, and this switch ensures I can process a wider choice.
-crf 30 sets the constant-rate-factor quality. (Note that lower CRF values mean higher quality, 0 being lossless; 30 trades some quality for a smaller, more compressed stream, which is the trade-off that matters for my client.)
-f fifo -attempt_recovery 1 -recovery_wait_time 1 -f flv rtmp://localhost:5555/video/test is part of the magic that goes with -loop. I believe it keeps the connection open with my stream server and reduces the risk of DISCONTINUITY tags in the playlist.
I hope this helps someone going forward.
The following links helped nudge me forward, and I share them as they might help others improve upon my solution:
Creating a video from a single image for a specific duration in ffmpeg
How can I loop one frame with ffmpeg? All the other frames should point to the first with no changes, maybe like a recursion
Display images on video at specific framerate with loop using FFmpeg
Loop image ffmpeg HLS
https://trac.ffmpeg.org/wiki/Slideshow
https://superuser.com/questions/1699893/generate-ts-stream-from-image-file
https://ffmpeg.org/ffmpeg-formats.html#Examples-3
https://trac.ffmpeg.org/wiki/StreamingGuide

ffmpeg: How to keep audio synced when doing many (100) cuts with filter select='between(t,start,stop)+between...'

I am cutting out silent parts of a 45 minute video (a lecture).
To do this, I use a filter to select, say, one hundred non-silent parts (I already know their start and end times).
ffmpeg -i in.mp4
-vf "select='between(t,start_1,stop_1)+...+between(t,start_100,stop_100)', setpts=N/FRAME_RATE/TB"
-af "aselect='between(t,start_1,stop_1)+...+between(t,start_100,stop_100)', asetpts=N/SR/TB"
-c:a aac -c:v libx264 out.mp4
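For concreteness, here is the same command with just two hypothetical segments (5-10 s and 20-30 s) in place of the hundred placeholders:
ffmpeg -i in.mp4
-vf "select='between(t,5,10)+between(t,20,30)', setpts=N/FRAME_RATE/TB"
-af "aselect='between(t,5,10)+between(t,20,30)', asetpts=N/SR/TB"
-c:a aac -c:v libx264 out.mp4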
It works, but at the end of the video the images are delayed relative to the audio.
After reading this answer I also added
-shortest -avoid_negative_ts make_zero -fflags +genpts
at the end of the command. It didn't help.
As audio and video are concatenated independently, I'm not surprised that tiny timing errors due to the finite frame rate add up: each video segment can only start and end on a frame boundary (so each cut can be off by up to one frame, 40 ms at 25 fps), while the audio is cut at much finer granularity, and over 100 cuts that mismatch can grow into a visible delay.
Is there a solution that doesn't involve saving every non-silent part as a file?

Why are images combined (ffmpeg)?

I want to create a video from images (one image per frame).
ffmpeg -framerate 21.533 -i %d.bmp -i z.wav -r 21.533 -t 120 -map 0:v:0 -map 1:a:0 -c:v libx265 -c:a aac -b:a 128k z.mp4
When I watch the resulting video I see (at least at the end of the video) that frames are blended with each other (two images overlap in each frame with different transparency ratios). It seems like a source/destination frame-rate mismatch.
I can remove the -framerate and -r options, but the result is the same (at 25 fps).
What's the problem, and how do I fix it?
The problem turned out to be KMPlayer, which plays the file with frame mixing/overlapping. The video itself is fine; other players play it correctly.
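If you want to confirm the file itself carries the expected rate, a quick hedged check with ffprobe (using the z.mp4 name from the command above):
# report the stream's average and declared frame rates
ffprobe -v error -select_streams v:0 -show_entries stream=avg_frame_rate,r_frame_rate -of default=noprint_wrappers=1 z.mp4
If this reports the 21.533 fps you asked for, the blending is purely a player-side effect.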

FFmpeg: reduce fps for a live H.264 stream with direct copy

I found various articles on changing the fps with ffmpeg, but none of them matches my exact purpose.
There is an ffmpeg command like below:
ffmpeg -i RTSPCAMERAPRODUCEH264 -c:v copy -an -movflags +frag_keyframe+empty_moov -f mp4
This remuxes my camera stream to fragmented MP4 perfectly.
Is there a way to force ffmpeg to lower the FPS to save bandwidth?
I.e. the camera streams 30 fps and needs 1 Mbps as fMP4 (sample numbers!).
I'd like to know if it's possible to lower the FPS and get an output stream for which 500 kbps (50% of the original) is enough, without re-encoding.
ffmpeg -r 1 -i RTSPCAMERAPRODUCEH264 -c:v copy -an -movflags +frag_keyframe+empty_moov -f mp4
and
ffmpeg -i RTSPCAMERAPRODUCEH264 -c:v copy -an -movflags +frag_keyframe+empty_moov -r 1 -f mp4
do not seem to work.
A temporally coded video stream (such as one using the H.264 codec) cannot have intermediate packets arbitrarily dropped, so this is not possible without re-encoding. Only whole GOPs, or the trailing part of a GOP, may be dropped.
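To see what that leaves you to cut on, you can inspect the stream's frame types; a hedged sketch, reading the first 10 seconds and reusing the question's RTSPCAMERAPRODUCEH264 placeholder for the camera URL:
# print one pict_type (I/P/B) per frame for the first 10 seconds
ffprobe -v error -select_streams v:0 -show_entries frame=pict_type -of csv=p=0 -read_intervals "%+10" RTSPCAMERAPRODUCEH264
Each run from one I-frame to the next is a GOP; with stream copy you could at best drop whole runs, not individual P/B frames, because those frames reference their neighbours.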

How to make an asymmetrical side-by-side video?

I want an asymmetrical side-by-side video with a resolution of 1920x1080. The first video has a bitrate of 1 Mb/s and the second a bitrate of 500 kb/s. Both videos have the same 1920x1080 resolution and are encoded with H.265 in an MP4 container.
I used this ffmpeg command:
ffmpeg -i leftvideo.mp4 -i rightvideo.mp4 -filter_complex "[0:v] scale=iw/2:ih, pad=2*iw:ih [left]; [1:v] scale=iw/2:ih [right]; [left][right] overlay=main_w/2:0 [out]" -map [out] -c:v libx265 output.mp4
It works well, but I want to keep the original video quality. I don't want to re-encode.
Is it possible to change the resolution of the two videos (to 960x1080) and pack them together into an MP4 container?
EDIT: or another method?
Using ffmpeg
You are required to re-encode if you want to use filters in ffmpeg, but if you want to "keep the quality" you can use a lossless output:
ffmpeg -i left.mp4 -i right.mp4 -filter_complex \
"[0:v]scale=iw/2:ih[l];[1:v]scale=iw/2:ih[r];[l][r]hstack" \
-c:v libx264 -qp 0 output.mp4
The resulting file size may be huge. If this is not acceptable you can try a "visually lossless" output by changing -qp 0 to -crf 18.
You did not provide full details about your inputs and did not mention audio, so I assumed you are not concerned with it.
You did not provide the complete console output from your command, so I assumed your ffmpeg is new enough to have the hstack filter.
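If you do want audio, a hedged variant that labels the stacked video and copies the first (left) input's audio track unchanged (an assumption; use -map 1:a instead to take the right input's audio):
ffmpeg -i left.mp4 -i right.mp4 -filter_complex \
"[0:v]scale=iw/2:ih[l];[1:v]scale=iw/2:ih[r];[l][r]hstack[v]" \
-map "[v]" -map 0:a -c:a copy -c:v libx264 -crf 18 output.mp4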
Using ffplay
Another option is to just use your player to show the videos side by side and not deal with re-encoding at all. Example using ffplay:
ffplay -f lavfi "movie=left.mp4,scale=iw/2:ih[v0];movie=right.mp4,scale=iw/2:ih[v1];[v0][v1]hstack"
