Colors not accurate in ffmpeg video

I am creating a video with ffmpeg by stringing together a bunch of PNG files. The resulting video has horizontal lines running across it and the colors are not accurate. Here's the command I used:
ffmpeg -framerate 1 -i img%04d.png -pix_fmt yuv420p timer.mp4
I am attaching an example of one of the input PNG files and a frame from the video. Can anyone tell me what's wrong?
[attached images: input file, video frame]

Related

Converting images to video keeping GOP 1 using ffmpeg

I have a list of images containing incremental integer values, saved in PNG format and numbered starting from 1, which need to be converted to a video with GOP 1 using ffmpeg. I used the following command to convert the images to video and then used ffplay to seek to a particular frame. The displayed frame doesn't match the frame being sought. Any help?
ffmpeg -i image%03d.png -c:v libx264 -g 1 -pix_fmt yuv420p out.mp4

Using ffmpeg, jpg to mp4 to mpegts, play with HLS M3U8, only first TS file plays - why?

Before posting I searched and found similar questions on Stack Overflow (I list some below); none helped me towards a solution, hence this post. The duration for which each image is shown in my movie files differs from that in many posts I have seen so far.
A camera captures one image every 30 seconds. I need to stream them, preferably via HLS, so I wrap two images in an MP4 and then convert the MP4 to mpegts. Each MP4 and TS file plays fine individually (each contains two images, each image transitions after 30 seconds, and each movie file is 1 minute long).
When I reference the two TS files in an M3U8 playlist, only the first TS file gets played. Can anyone advise why it stops and how I can get it to play all the TS files I expect to create, not just the first one? Besides my ffmpeg commands, I also include my VLC log file (though I expect to stream to Firefox/Chrome clients). I am using ffmpeg 4.2.2-static installed on an AWS EC2 instance running Amazon Linux 2.
I have four JPGs named image11.jpg, image12.jpg, image21.jpg, image22.jpg; they look near-identical, as only the timestamp in the top left changes.
The following command creates 1.mp4 using image11.jpg and image12.jpg, each image displayed for 30 seconds, for a total duration of 1 minute. It plays as expected.
ffmpeg -y -framerate 1/30 -f image2 -i image1%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 1.mp4
I then convert 1.mp4 to an mpegts file, creating 1.ts. It plays as expected.
ffmpeg -y -i 1.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 1.ts
I repeat the above steps for image21.jpg and image22.jpg, creating 2.mp4 and 2.ts:
ffmpeg -y -framerate 1/30 -f image2 -i image2%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 2.mp4
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 2.ts
So now I have 1.mp4, 1.ts, 2.mp4, and 2.ts, and all four play individually just fine.
Using ffprobe I can confirm their duration is 60 seconds, for example:
ffprobe -i 1.ts -v quiet -show_entries format=duration -hide_banner -print_format json
My m3u8 playlist follows:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-TARGETDURATION:60.000
#EXTINF:60.0000,
1.ts
#EXTINF:60.000,
2.ts
#EXT-X-ENDLIST
Can anyone advise where I am going wrong?
VLC Error Log (though I expect to play via web browser)
I have researched the process using these (and other pages) as a guide:
How to create a video from images with ffmpeg
convert from jpg to mp4 by ffmpeg
ffmpeg examples page
FFMPEG An Intermediate Guide/image sequence
How to use FFmpeg to convert images to video
Take a look at the start_pts/start_time values in the ffprobe -show_streams output; my guess is that they all start at or near zero, which will cause playback to fail after your first segment.
You can still produce the segments independently, but you will want to use something like -output_ts_offset to set the timestamps correctly for subsequent segments.
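For example, something like the following (a sketch, assuming the 60-second segments from the question):
# Inspect each segment's start time; if both report ~0, sequential playback will stall
ffprobe -v quiet -show_entries stream=start_pts,start_time -print_format json 1.ts
ffprobe -v quiet -show_entries stream=start_pts,start_time -print_format json 2.ts
# Re-mux the second segment with a 60-second offset so its timestamps follow the first
ffmpeg -y -i 2.mp4 -c:v copy -bsf:v h264_mp4toannexb -output_ts_offset 60 -f mpegts 2.ts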
The following solution works well for me. I have tested it uninterrupted for more than two hours and believe it ticks all my boxes. (Edited because I forgot the all-important -re flag.)
ffmpeg loops continuously, reading test.jpg and streaming it to my RTMP server. When my camera posts an image every 30 seconds, I copy the new image over the existing test.jpg, which in effect changes what is streamed out.
Note: the command below is all one line; I have added line breaks to assist reading. The order of the parameters is important; -loop and -fflags +genpts, for example, must appear before the -i parameter.
ffmpeg
-re
-loop 1
-fflags +genpts
-framerate 1/30
-i test.jpg
-c:v libx264
-vf fps=25
-pix_fmt yuvj420p
-crf 30
-f fifo -attempt_recovery 1 -recovery_wait_time 1
-f flv rtmp://localhost:5555/video/test
Some arguments explained:
-re means read the input at its native frame rate, i.e., in real time
-loop 1 (1 turns looping on, 0 off)
-fflags +genpts is something I only half understand. PTS, I believe, is the presentation timestamp of each frame; without this flag, the PTS is reset to zero with every new image. Using this argument means I avoid EXT-X-DISCONTINUITY when a new image is served.
-framerate 1/30 means one frame every 30 seconds
-i test.jpg is my image 'placeholder'. As new images are received, a separate script overwrites this image (see the sketch after this list). Combined with -loop, this means the ffmpeg output will reference the new image.
-c:v libx264 selects H.264 video encoding
-vf fps=25: removing this, or using a different value, resulted in my output stream not being 30 seconds
-pix_fmt yuvj420p (sometimes I have seen yuv420p referenced, but that did not work in my environment). I believe the 'j' variant denotes the full-range colour that JPEGs typically use, so this switch lets me process a wider range of inputs.
-crf 30 sets the constant-rate-factor quality level (quality is important for my client); note that lower values mean higher quality, with 0 lossless and 23 the x264 default, so 30 actually favours smaller size over quality
-f fifo -attempt_recovery 1 -recovery_wait_time 1 -f flv rtmp://localhost:5555/video/test is part of the magic that goes with -loop. I believe the fifo muxer keeps the connection open to my streaming server and reduces the risk of DISCONTINUITY in the playlist.
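The separate updater script is not shown here; a minimal sketch follows (the incoming path and file name are assumptions, not from the original). Copying to a temp file and then renaming matters because the rename is atomic, so ffmpeg never reads a half-written JPEG:
# Hypothetical updater, run whenever the camera posts a new image
cp /path/to/incoming/latest.jpg test.jpg.tmp
mv test.jpg.tmp test.jpg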
I hope this helps someone going forward.
The following links helped nudge me forward; I share them as they might help others improve upon my solution:
Creating a video from a single image for a specific duration in ffmpeg
How can I loop one frame with ffmpeg? All the other frames should point to the first with no changes, maybe like a recursion
Display images on video at specific framerate with loop using FFmpeg
Loop image ffmpeg HLS
https://trac.ffmpeg.org/wiki/Slideshow
https://superuser.com/questions/1699893/generate-ts-stream-from-image-file
https://ffmpeg.org/ffmpeg-formats.html#Examples-3
https://trac.ffmpeg.org/wiki/StreamingGuide

Use ffmpeg to create a Zoom virtual background video

Using ffmpeg, I created a video from a list of PNG images to use as a Zoom virtual background. However, when I try to upload it to Zoom, it says "Unsupported format. Please upload a different file." Here is the command that I used:
ffmpeg -framerate 1 -i img%04d.png output.mp4
I get the same error if I try to output a .mov file. Am I missing some option in the ffmpeg command?
PNGs store pixel color data as RGB values, while videos store color data as YUV. When converting an RGB input, ffmpeg defaults to a YUV format that preserves full signal fidelity but is incompatible with most players. You have to set a compatible pixel format with reduced chroma resolution, such as yuv420p. Also, a frame rate of 1 isn't supported by some players, so duplicate frames to increase the output frame rate:
ffmpeg -framerate 1 -i img%04d.png -r 5 -pix_fmt yuv420p output.mp4
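To double-check the result, an ffprobe call along these lines (a quick verification sketch) should report yuv420p and a 5 fps frame rate:
ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt,r_frame_rate -of default=noprint_wrappers=1 output.mp4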

ffmpeg: Combine an audio (.wav) and video in (.rgb) format

I want to synchronously play an audio (.wav) file and a video which is provided to me in raw RGB format.
The .rgb file contains all the RGB images of the video frames. How can I combine the .rgb file and the audio using ffmpeg to get an output video that can be played in VLC?
Input 1 : audio.wav
Input 2 : allimages.rgb
Output : A video file which can be played in vlc player.
I was looking at the ffmpeg documentation but couldn't find anything for RGB input. It would be a great help if you could provide the ffmpeg command for doing the above.
Thanks
The closest I got is with the command below, but I see green and pink colors in the video when I play it. I think I am missing something in the ffmpeg command. Can anyone tell me what is wrong with this command and help me improve the video quality and remove the green and pink colors?
ffmpeg -s 480x270 -r 15 -pix_fmt gbrp -i /Users/sandeep/Downloads/Videos/input.rgb -c:v libx264 -y output.mp4
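One possible culprit, offered only as a guess since the exact byte layout of input.rgb isn't stated: gbrp is a planar GBR format, while raw RGB dumps are usually packed rgb24, and a plane/order mismatch produces exactly this kind of green/pink tint. A sketch assuming packed 8-bit RGB frames, also muxing in the audio:
ffmpeg -f rawvideo -s 480x270 -r 15 -pix_fmt rgb24 -i input.rgb -i audio.wav -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest -y output.mp4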

How to set font size in subtitles in ffmpeg video filter

I'm using this command to crop, scale, and then add subtitles as an overlay:
ffmpeg -i input.avi -vf "[in]crop=in_w:in_h-20:0:0 [crop]; [crop]scale=320:240 [scale];[scale]subtitles=srt.srt" -aspect 16:9 -vcodec libx264 -crf 23 oq.mp4
How can we set the font size/color of the subtitles?
There are two methods to use subtitles: hardsubs and softsubs.
Hardsubs
The subtitles video filter can be used to hardsub, or burn-in, the subtitles. This requires re-encoding and the subtitles become part of the video itself.
force_style option
To customize the subtitles you can use the force_style option in the subtitles filter. Example using the subtitles file subs.srt with a font size of 24 and red font color:
ffmpeg -i video.mp4 -vf "subtitles=subs.srt:force_style='Fontsize=24,PrimaryColour=&H0000ff&'" -c:a copy output.mp4
force_style uses the SubStation Alpha (ASS) style fields.
PrimaryColour is in hexadecimal in Blue Green Red order. Note that this is the opposite order of HTML color codes. Color codes must always start with &H and end with &.
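For reference, the mapping from familiar HTML codes just reverses the byte order:
HTML #RRGGBB   ->  ASS &HBBGGRR&
red   #FF0000  ->  &H0000FF&
green #00FF00  ->  &H00FF00&
blue  #0000FF  ->  &HFF0000&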
Aegisub
Alternatively, you can use Aegisub to create and stylize your subtitles. Save as SubStation Alpha (ASS) format as it can support font size, font color, shadows, outlines, scaling, angle, etc.
Softsubs
Softsubs are additional streams within the file; the player simply renders them upon playback. They are more flexible than hardsubs because:
You do not need to re-encode the video.
You can have multiple subtitles (various languages) and switch between them.
You can toggle them on/off during playback.
They can be resized with any player worth using.
Of course sometimes hardsubs are needed if the device or player is unable to utilize softsubs.
To mux subtitles into a video file using stream copy mode:
ffmpeg -i input.mkv -i subtitles.ass -codec copy -map 0 -map 1 output.mkv
Nothing is re-encoded, so the whole process will be quick and the quality and formats will be preserved.
Using SubStation Alpha (ASS) subtitles will allow you to format the subtitles however you like. These can be created/converted with Aegisub.
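When muxing multiple languages, the streams can be tagged so players label them correctly; a sketch (the subtitle file names and the second German track are assumptions, and input.mkv is assumed to have no subtitle streams of its own):
ffmpeg -i input.mkv -i subs_en.ass -i subs_de.ass -map 0 -map 1 -map 2 -codec copy -metadata:s:s:0 language=eng -metadata:s:s:1 language=ger output.mkv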
Also see
subtitles video filter documentation
How to burn subtitles into the video
How to convert subtitle from SRT to ASS format
From the documentation, you can use an SRT subtitle file and change the font size by passing ASS-style KEY=VALUE pairs separated by commas. So,
ffmpeg -i input.mp4 -vf subtitles=sub.srt:force_style='FontName=DejaVu Serif,FontSize=24' -vcodec libx264 -acodec copy -q:v 0 -q:a 0 output.mp4
will render the subtitles in DejaVu Serif at size 24 while keeping the quality of the video. I've tried it myself and it worked.
FFmpeg is good, but sometimes videos can become blurry. I recommend comparing with VLC's converter:
Media > Convert/Save
I find the default settings of the 'Video for iPad HD/iPhone/PSP' profile very good, or I can reduce the bitrate from 700 kb/s to 350 kb/s to make the file smaller.
