ffmpeg output doesn't play on multiple devices - ffmpeg

I have read the other Stack Overflow posts regarding this topic, so I am fairly certain this is not an exact duplicate.
ffmpeg exports a video that seems to play only on select players. I want to export a video that plays on iPhone/Mac/general players. I have seen the suggestions for the -pix_fmt yuv420p flag, but this does not seem to work anymore - I read that Apple has since changed their systems in a way that makes it incompatible.
I am running:
ffmpeg -start_number 1 -framerate 4 -pix_fmt yuv420p -i screen%01d.png output.mp4
This all works fine and I can see the video by doing:
ffplay output.mp4
But I would like to be able to transfer this to mobile or general playback. Is there any way to do this, ideally using ffmpeg? I'd rather not use two tools to do one job.
Works on gmail
Doesn't work on QuickTime Player
Doesn't work on Flip Player
Doesn't work on iPhone

Order of options is important. It should be,
ffmpeg -start_number 1 -framerate 4 -i screen%01d.png -pix_fmt yuv420p output.mp4
Now pix_fmt is set as an output option. Originally, it was trying to force the input format, but since PNGs are images with metadata and not raw pixel data, the option had no effect. Additionally, for web use, it's good to also set -movflags +faststart as an output option.
Note that old versions of VLC couldn't play videos with framerate < 6. Could possibly be an issue with a few other players as well. Add -r 8 as an output option to avoid that.
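Putting these recommendations together, a corrected command might look like this (filenames as in the question; the -r 8 output rate is only needed if you run into the low-framerate player issue):

```shell
# All per-output options (-pix_fmt, -movflags, -r) now come after the input.
# -r 8 duplicates frames so players that reject very low frame rates still work.
ffmpeg -start_number 1 -framerate 4 -i screen%01d.png \
    -pix_fmt yuv420p -movflags +faststart -r 8 output.mp4
```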

Related

Morphing images into each other using ffmpeg

I have a directory containing JPEGs. Using ffmpeg, I want to read the images and create a video out of them. The images should morph into each other; every image should be visible for ~2 seconds.
So far, I came up with this:
ffmpeg -i %d.jpg -c:v libx264 -pix_fmt yuv420p ./bla.mp4
Now what I'm missing is the morphing part. I know there is a "morph" filter, but no matter how I try it, I either get the error No such filter: 'morph' (even though I compiled ffmpeg on my own, making sure the needed libs are included), or I get a random 0xfff-something error.
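FFmpeg does not ship a filter named morph, which is why the filter lookup fails. One workaround I'm aware of (a sketch, not a true morph) is the framerate filter, which blends neighbouring frames when upsampling, giving a crossfade-like transition; filenames are assumed from the question:

```shell
# Each source image is held for 2 s (-framerate 0.5), then the framerate
# filter upsamples to 25 fps, blending between frames for a crossfade effect.
# interp_start=0:interp_end=255 blends across the whole transition, and
# scene=100 effectively disables scene-change detection so blending always runs.
ffmpeg -framerate 0.5 -i %d.jpg \
    -vf "framerate=fps=25:interp_start=0:interp_end=255:scene=100,format=yuv420p" \
    -c:v libx264 bla.mp4
```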

Cross device h264 compatible html5 video

I'm trying to serve a large video of timelapses generated from a series of images.
Using FFmpeg I have encoded the video as an h264 mp4.
ffmpeg -framerate 24 -i "/app/download/%d.jpeg" -c:v libx264 -crf 23 -preset fast -tune animation -report -vf "format=yuv420p" -y /app/output.mp4
I'm running into compatibility issues where the videos are not playable on iOS (Safari) or on Windows (all browsers except Chrome), where I'm getting the following error:
Error Code: NS_ERROR_DOM_MEDIA_FATAL_ERR (0x806e0005) Details: mozilla::MediaResult __cdecl mozilla::WMFVideoMFTManager::ValidateVideoInfo(void): Can't decode H.264 stream because its resolution is out of the maximum limitation
See the full FFmpeg log here: https://pastebin.com/QUEPh3q2
I'm just looking for some resource or knowledge of how to encode my media for maximum compatibility while still preserving high quality and resolution.
Problem:
Which options I should be using in FFmpeg to maximize compatibility?
From comments: "My videos are maximally of size 4056x3040 or 3040x4056".
I don't have Apple device(s) but you might be hitting some image size limitation on Windows.
Firefox uses the built-in Windows H264 decoder where the maximum height is 2304.
Replace the old command:
ffmpeg -framerate 24 -i "/app/download/%d.jpeg" -c:v libx264 -crf 23 -preset fast -tune animation -report -vf "format=yuv420p" -y /app/output.mp4
With this new one:
ffmpeg -framerate 24 -i "/app/download/%d.jpeg" -vf scale=3069:2300,setsar=1:1 -c:v libx264 -pix_fmt yuv420p -profile:v high -crf 23 -preset fast -movflags +faststart -report -y /app/output.mp4
The above command changes the size to 3069x2300 (within the Windows resolution limits), but I recommend a smaller size like 1441x1080 for maximum device/OS/browser compatibility.
I would leave out -tune animation; add it back if its removal affects your image quality.
Also added is -movflags +faststart, which places the MP4 header at the front of the file (it is usually written at the end), meaning playback can begin without first downloading the whole file just to reach the header data (which holds the decoder settings needed to begin playback).
I think your bigger issue will be trying to send 4056x3040 video over mobile networks. You're going to have lots of stalling and poor playback over many types of connections that cannot support the bandwidth you'll need. Nor does a mobile device have a big enough screen to actually playback the video dimensions you would be sending.
I'd suggest you look at HLS streaming - and adaptive bitrates. That way, you can create your huge version, a 1080p version, a 720p version (etc.) the video player will deliver the correctly sized video to the device - no wasted data/pixels, fewer stalls, and it still looks great.
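A minimal two-rendition HLS ladder along those lines might look like the following (a sketch; the bitrates, sizes, and segment length are placeholder assumptions you would tune):

```shell
# Split the decoded video, scale to two sizes (width auto-computed, kept even
# with -2), encode each, and package both as HLS variant playlists plus a
# master playlist the player uses for adaptive switching.
ffmpeg -i output.mp4 \
  -filter_complex "[0:v]split=2[v1][v2];[v1]scale=-2:1080[v1o];[v2]scale=-2:720[v2o]" \
  -map "[v1o]" -c:v:0 libx264 -b:v:0 6M \
  -map "[v2o]" -c:v:1 libx264 -b:v:1 3M \
  -f hls -hls_time 6 -hls_playlist_type vod \
  -var_stream_map "v:0 v:1" \
  -master_pl_name master.m3u8 stream_%v.m3u8
```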

How to remove ffmpeg artifacts in the output timelapse video?

I used a number of jpeg files to create a timelapse video with ffmpeg. Individually they are visually ok.
These source images were captured by a mirrorless camera in JPEG format.
If I upload the timelapsevideo to youtube, the video is clear and without any artifact: https://www.youtube.com/watch?v=Qs-1ahCrb0Y
However if I play the video file locally on MacOS in Photo or Quicktime apps or in iOS, there are artifacts in the video. Here are some of the examples:
(two screenshots of the artifacts were attached here)
This is the ffmpeg command I used to generate the video:
ffmpeg -framerate 30 -pattern_type glob -i "DSCF*.JPG" -pix_fmt yuv420p -profile baseline output.mp4
What additional parameter can I use to remove those artifacts?
Edit:
File info
The video plays without issue in VLC.
The H.264 codec standard defines levels. The level represents the resources required by a decoder to smoothly process a stream. Usually, levels are only pertinent for hardware players. However, some software players may have been designed with a level ceiling. Apparently, that's the case with Apple's players.
Your video's frame size is 6000x4000, for which the player has to support level 6.0, a recent addition to the standard (~2 years old). I suggest you halve the resolution:
ffmpeg -framerate 30 -pattern_type glob -i "DSCF*.JPG" -vf scale=iw/2:ih/2,format=yuv420p -profile:v baseline out.mp4
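One caveat with scale=iw/2:ih/2: yuv420p requires even frame dimensions, so if a source size is not divisible by 4 the command can fail. A defensive variant of the same idea forces even output dimensions:

```shell
# trunc(iw/4)*2 halves the width while guaranteeing an even result,
# which yuv420p (4:2:0 chroma subsampling) requires; likewise for height.
ffmpeg -framerate 30 -pattern_type glob -i "DSCF*.JPG" \
    -vf "scale=trunc(iw/4)*2:trunc(ih/4)*2,format=yuv420p" \
    -profile:v baseline out.mp4
```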

ffmpeg record screen file formats

Good day,
I'm currently writing a bash script which records the screen under certain conditions. The problem is that only .avi works as a file extension for recording the screen. This script is going to be used on a Raspberry Pi, and currently I get only 10-20 fps on a decent virtual machine (the goal would be around 30 fps). I think .avi is not suited for my project, but .mpeg and .mp4 are not working for recording. I tried recording with .avi and then converting it to .mp4, but I have limited memory and .avi is just too big in size. I currently use the following command:
ffmpeg -f x11grab -y -r 30 -s 960x750 -i :0.0+0,100 -vcodec huffyuv ./Videos/out_$now.avi
//$now is the current date and time
So I wanted to know if I need some special packages from ffmpeg to record with for example .mp4 or if there are other file formats available for ffmpeg screen recording.
Edit:
I found that the codec libx264 for mp4 works, but the fps drop until they hit 5 fps, which is definitely too low. The recorded video appeared like a fast-forward version of the recorded screen.
With mpeg4 for mpeg I reached over 30 fps, but the video quality was very bad.
It appears that even my big avi files look like they are being played fast-forward. Is there something I'm doing wrong?
Is there a good middle way, where I get a decend video quality, good fps (20+) and a file which isn't too big?
Edit 2:
I tried recording with .avi and converting it afterwards. Just converting with ffmpeg -i test.avi -c:a aac -b:a 128k -c:v libx264 -crf 23 output.mp4
resulted in the same frame drops as if I was recording with .mp4. But when I cut a little bit off the beginning of the video and named the output file .mp4, the size became much smaller. When I started the cut at 0:00:00 (so effectively just converting), it only changed the file format without re-encoding, so the size stayed the same. Any ideas?
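One likely cause of the frame drops described above is that the encoder cannot keep up on a Pi-class CPU, so frames are dropped at capture time and the result plays back fast-forward. A common compromise (a sketch, reusing the capture geometry from the question) is x264 with its cheapest preset, writing MP4 directly:

```shell
# -preset ultrafast minimizes CPU cost per frame; -crf 28 trades some
# quality for a smaller file. yuv420p keeps the MP4 widely playable.
ffmpeg -f x11grab -framerate 30 -video_size 960x750 -i :0.0+0,100 \
    -c:v libx264 -preset ultrafast -crf 28 -pix_fmt yuv420p \
    ./Videos/out_$now.mp4
```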

Produce waveform video from audio using FFMPEG

I am trying to create a waveform video from audio. My goal is to produce a video that looks something like this
For my test I have an mp3 that plays a short clipped sound. There are 4 bars of 1/4 notes and 4 bars of 1/8 notes played at 120 bpm. I am having some trouble coming up with the right combination of preprocessing and filtering to produce a video that looks like the image. The colors don't have to be exact; I am more concerned with the shape of the beats. I tried a couple of different approaches using showwaves and showspectrum. I can't quite wrap my head around why the beats go past so quickly when using showwaves, while showspectrum produces a video where I can see each individual beat.
ShowWaves
ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showwaves=s=1280x100:mode=cline:rate=25:scale=sqrt,format=yuv420p[v]" -map "[v]" -map 0:a output_wav.mp4
This link will download the output of that command.
ShowSpectrum
ffmpeg -i beat_test.mp3 -filter_complex "[0:a]showspectrum=s=1280x100:mode=combined:color=intensity:saturation=5:slide=1:scale=cbrt,format=yuv420p[v]" -map "[v]" -an -map 0:a output_spec.mp4
This link will download the output of that command.
I posted the simple examples because I didn't want to confuse the issue by adding all the variations I have tried.
In practice I suppose I can get away with the output from showspectrum but I'd like to understand where/how I am thinking about this incorrectly. Thanks for any advice.
Here is a link to the source audio file.
What showwaves does is show the waveform in real time, and the display window is 1/framerate, i.e. if the video output is 25 fps, then each frame shows the waveform of 40 ms of audio. There's no 'history' or 'memory', so you can't (directly) get a scrolling output like the one your reference video shows.
The workaround for this is to use the showwavespic filter to produce a single frame showing the entire waveform at a high enough horizontal resolution. Then do a scrolling overlay of that picture over a desired background, at a speed such that the scroll lasts as long as the audio.
Basic command template would be:
ffmpeg -loop 1 -i bg.png -loop 1 -i wavespic.png -i audio.mp3 \
    -filter_complex "[0][1]overlay=x='W-w*t/mp3dur':y=SOMEFIXEDVALUE" -shortest waves.mp4
mp3dur above should be replaced with the duration of the audio file.
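Filled in, with the audio duration fetched via ffprobe instead of hard-coded (filenames and the y offset are assumptions carried over from the template):

```shell
# Query the audio duration in seconds, then scroll the waveform picture
# right-to-left across the background over exactly that duration.
dur=$(ffprobe -v error -show_entries format=duration \
      -of default=noprint_wrappers=1:nokey=1 audio.mp3)
ffmpeg -loop 1 -i bg.png -loop 1 -i wavespic.png -i audio.mp3 \
    -filter_complex "[0][1]overlay=x='W-w*t/$dur':y=40" \
    -c:v libx264 -pix_fmt yuv420p -shortest waves.mp4
```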