So I have a Raspberry Pi app that records output from the on-board camera. These files are recorded as H264. After a user presses a button I want to display a portion of that video with OMXPlayer. OMXPlayer always needs an MP4 container (it always ignores FPS).
I don't want to wrap the entire H264 into an MP4 as that takes too much time.
My solution would be trim the last 30 seconds and place into MP4 container. Can I do this in one step without copying the entire content of the H264 into the MP4 first?
I don't want to re-encode this and I'm looking for the fastest operation possible.
This will be fast; just stream-copy the last 30 seconds into an MP4 container:
ffmpeg -sseof -30 -i INPUTFILE -c:v copy -c:a copy out.mp4
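To sanity-check the result, an ffprobe query like this (assuming ffprobe is installed alongside ffmpeg) prints just the duration of the trimmed clip in seconds:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 out.mp4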
I used a number of jpeg files to create a timelapse video with ffmpeg. Individually they are visually ok.
These source images were captured by a mirrorless digital camera in JPEG format.
If I upload the timelapse video to YouTube, the video is clear and without any artifacts: https://www.youtube.com/watch?v=Qs-1ahCrb0Y
However, if I play the video file locally on macOS in the Photos or QuickTime apps, or on iOS, there are artifacts in the video. Here are some examples (screenshots omitted).
This is the ffmpeg command I used to generate the video:
ffmpeg -framerate 30 -pattern_type glob -i "DSCF*.JPG" -pix_fmt yuv420p -profile baseline output.mp4
What additional parameters can I use to remove those artifacts?
Edit:
(file info screenshot omitted)
The video plays without issue in VLC.
The H.264 codec standard defines levels. The level represents the resources required by a decoder to smoothly process a stream. Usually, levels are only pertinent for hardware players. However, some software players may have been designed with a level ceiling. Apparently, that's the case with Apple's players.
Your video's frame size is 6000x4000, which requires the player to support level 6.0, a recent addition to the standard (roughly two years old). I suggest you halve the resolution:
ffmpeg -framerate 30 -pattern_type glob -i "DSCF*.JPG" -vf scale=iw/2:ih/2,format=yuv420p -profile:v baseline out.mp4
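If you want to verify what profile and level the re-encoded file actually carries, an ffprobe query along these lines should show them (these are standard ffprobe stream fields):
ffprobe -v error -select_streams v:0 -show_entries stream=width,height,profile,level -of default=noprint_wrappers=1 out.mp4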
Good day,
I'm currently writing a bash script which records the screen under certain conditions. The problem is that only .avi works as a file extension for recording the screen. This script is going to be used on a Raspberry Pi, and currently I get only 10-20 fps even on a decent virtual machine (the goal would be around 30 fps). I think .avi is not suited for my project, but .mpeg and .mp4 are not working for recording. I tried recording with .avi and then converting it to .mp4, but I have limited memory and .avi is just too big. I currently use the following command:
ffmpeg -f x11grab -y -r 30 -s 960x750 -i :0.0+0,100 -vcodec huffyuv ./Videos/out_$now.avi
# $now is the current date and time
So I wanted to know whether I need some special packages for ffmpeg to record to, for example, .mp4, or whether there are other file formats available for ffmpeg screen recording.
Edit:
I found that the codec libx264 for mp4 works, but the fps drop until they hit 5 fps, which is definitely too low. The recorded video looked like a fast-forward version of the recorded screen.
With mpeg4 for .mpeg I reached over 30 fps, but the video quality was very bad.
It appears that even my big .avi files look like they are being played in fast forward. Is there something I am doing wrong?
Is there a good middle way, where I get decent video quality, good fps (20+) and a file that isn't too big?
Edit 2:
I tried recording it with .avi and converting it afterwards. Just converting with ffmpeg -i test.avi -c:a aac -b:a 128k -c:v libx264 -crf 23 output.mp4
resulted in the same frame drops as when recording to .mp4 directly. But when I cut a little bit off the beginning of the video and named the output file .mp4, the size became much smaller. However, when I started the cut at 0:00:00 (so effectively just converting), it only changed the file format without converting it (the size stayed the same). Any ideas?
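(Not from the original post, but as a sketch of the kind of middle way often suggested for this situation: record straight to MP4 with x264's ultrafast preset. The -preset and -crf values here are assumptions to tune for your hardware.)
# record X11 directly to MP4; ultrafast minimises CPU load, -crf 28 keeps the file reasonably small
ffmpeg -f x11grab -framerate 30 -video_size 960x750 -i :0.0+0,100 -c:v libx264 -preset ultrafast -crf 28 -pix_fmt yuv420p ./Videos/out_$now.mp4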
I want an output video whose audio is created using ffmpeg's -filter_complex mechanism:
/usr/local/Cellar/ffmpeg/3.2.2/bin/ffmpeg
-i /uploads/videos/1487684390-lg9htt0RW2.mov
-i /uploads/audios/1487664761-SCPbo6Tkac.mp3
-filter_complex "
[0:a]atrim=0:8.70824980736,asetpts=PTS-STARTPTS[aud1];
[1:a]atrim=0:12.9567301273,asetpts=PTS-STARTPTS[aud2];
[0:a]volume=0.3,atrim=start=8.70824980736:21.6649799347,asetpts=PTS-STARTPTS[slow_aud];
[aud2][slow_aud] amerge=inputs=2[a_merged];
[0:a]atrim=start=21.6649799347:31.6410098076 [remaining_audio];
[aud1][a_merged][remaining_audio]concat=n=3:v=0:a=1[aout]"
-map 0:v -map "[aout]" -c:v copy -acodec mp3
/uploads/output/1487684390-lg9htt0RW2.mov
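For readability, the labels in that filtergraph are:
# [aud1]            video audio from 0 to ~8.71 s
# [aud2]            the mp3, trimmed to ~12.96 s
# [slow_aud]        video audio from ~8.71 s to ~21.66 s, volume lowered to 0.3
# [a_merged]        [aud2] and [slow_aud] merged with amerge
# [remaining_audio] video audio from ~21.66 s to ~31.64 s
# [aout]            [aud1] + [a_merged] + [remaining_audio] concatenated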
Original audio recording vs. original video recording, by UTC timestamp:
13:00-------- Original Event Audio -------- 13:20
12:50------------- Event Video Recorded --------------13:30
This is my requirement.
So the audio of the output video should contain:
First 10 seconds (12:50 - 13:00): audio of the event video recording
Next 20 seconds (13:00 - 13:20): merged audio (original audio + the event video's audio, with the video's audio at volume 0.3)
Remaining 10 seconds (13:21 - 13:30): the video plays the remaining audio of the event video
What I am getting with the above command:
First 10 seconds (12:50 - 13:00) are audio of the event video recording: achieved
Next 20 seconds (13:00 - 13:20) are merged audio (original audio + the event video's audio at volume 0.3): achieved
Remaining 10 seconds (13:21 - 13:30) of video playing the remaining audio of the event video: not achieved
You haven't reset the timestamps of the remaining audio, as the concat filter requires. So, it should be
[0:a]atrim=start=21.6649799347:31.6410098076,asetpts=PTS-STARTPTS[remaining_audio];
A shorter way of achieving the same result is
-filter_complex
"[1:a]adelay=12956.7301273|12956.7301273[mp3];
[0:a]volume=0.3:enable='between(t,8.70824980736,21.6649799347)'[vid];
[vid][mp3]amix[aout]"
I need a video file whose audio and video track durations are always the same. The file must contain an audio track even if the source has no audio track. How do I tell ffmpeg to add a silent audio track when the source has none? Also, if the source has an audio track that is a different duration than the video, I need ffmpeg to append silent audio to make the output audio and video the same duration. Is this possible in one line with ffmpeg?
The command below will add a silent track of the same length* as the video, if there is no audio** in the source file.
ffmpeg -i video -f lavfi -i anullsrc=cl=1 -shortest -c:v libx264 -c:a aac output.mov
*Since video frame duration and audio frame duration aren't usually identical, the lengths won't be exactly the same.
**When map is not specified, ffmpeg selects a single audio stream from among the inputs: the one with the highest channel count. If there are two or more streams with the same number of channels, it selects the stream with the lowest index. anullsrc here has one channel, so it will be passed over except when the source has no audio.
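The other half of the question (an existing audio track that is shorter than the video) is not handled by the command above. A common way to cover that case, sketched here as an assumption rather than as part of this answer, is the apad filter combined with -shortest:
# pad the existing audio with silence, then stop when the (now shorter) video stream ends
ffmpeg -i video -af apad -c:v copy -c:a aac -shortest output.mov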
I would like to transcode a video stream using the ffmpeg tool and change only the video resolution; the other video and audio parameters should remain the same.
According to the ffmpeg man page, the following command line should provide the desired result:
ffmpeg -i input.mp4 -vcodec copy -acodec copy -s WxH output.avi
The Video codec of the input stream is compatible with avi container.
The actual result is that the resolution remains unchanged and it seems that the stream is just repacked in avi container.
The resolution of the output stream is changed successfully without the -vcodec copy option, but the video codec is changed: h264 (Constrained Baseline) -> mpeg4 (Simple Profile).
When you copy a video stream, you cannot change any of its parameters, since… well, you're copying it. ffmpeg won't touch it in any way, so it can't change the dimensions, frame rate, et cetera.
Also, ffmpeg always chooses a default video codec if you don't specify one. For AVI files, that's mpeg4.
If you want H.264 video, choose -c:v libx264 instead (or -vcodec libx264 which is the same). If you need to keep the original profile, use -profile:v baseline.
Two things:
When you change the size, you will re-encode the video. This lowers the quality and might considerably harm the video. To compensate, you may need to set a higher quality level; you do this by setting the Constant Rate Factor to anything below the default of 23, e.g. with -crf 20. Experiment and see how your video looks. If you have the time, add -preset slow (or slower, veryslow), which will give you better compression.
Not that it matters in your case, since your input uses the Constrained Baseline profile, but note that H.264 in AVI is not properly supported, at least when B pictures are used. Baseline doesn't support B pictures, though, so you should be fine. The file might not play back on some devices or players if you use the Main profile or anything above. I would rather mux it into an MP4 or MKV container, especially since your input file is MP4 anyway.
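Putting those suggestions together, a sketch (WxH is a placeholder for your target resolution, and -crf 20 / -preset slow are starting points rather than fixed values):
ffmpeg -i input.mp4 -c:a copy -c:v libx264 -profile:v baseline -preset slow -crf 20 -s WxH output.mp4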