I have a series of files from the same source, and therefore of the exact same format in every way, that I'm concatenating with FFmpeg:
file1.mov
file2.mov
file3.mov
This is very fast and works fine. However, now I want to take an optional intro file (from many different sources and of many different types) and convert it to match the others before joining:
intro.mp4
How do I do this with FFMPEG?
Does this give me everything I need?
ffprobe -select_streams a:0 -show_entries \
stream=codec_name,channels -of default=nw=1:nk=1 -v 0 ./file1.mov
ffprobe -select_streams v:0 -show_entries \
stream=codec_name,width,height,r_frame_rate,pix_fmt \
-of default=nw=1:nk=1 -v 0 ./file1.mov
So with that I can just:
ffmpeg -i intro.mp4 \
-c:v h264 -s 1280x720 -pix_fmt yuv420p -r 30 \
-c:a pcm_s16le -ac 1 intro.mov
and then merge it seamlessly to the rest?
ffmpeg -f concat -safe 0 -i videos.txt -c copy merged.mov -y
The answer is of course "no", hence the request for your support.
The audio is fine when files 1, 2 & 3 are merged, but is too fast when the intro + 1, 2 & 3 are merged. The converted intro file always plays fine on its own after the conversion and after the merge, but the others play audio too fast after the merge.
What am I missing?
UPDATE:
So in the end this worked for the intro:
ffmpeg -i intro.mp4 \
-c:v h264 -s 1280x720 -pix_fmt yuv420p -r 30 \
-c:a pcm_s16le -ac 1 -b:a 512k -ar 32000 intro.mov -y
I suspect intro.mp4 has a higher audio sample rate than the rest, and -f concat is setting the virtual input file's audio sample rate to that of intro.mov and running with it.
To fix this, run your probe again and check the audio sample rate (I can't remember off the top of my head what it's called in the output, but it could be "sample_rate").
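For example, a probe along the lines of the ones above should print just that field (a sketch; in ffprobe's stream section the field is sample_rate):
ffprobe -select_streams a:0 -show_entries \
stream=sample_rate -of default=nw=1:nk=1 -v 0 ./file1.mov
Call this number fs, then transcode intro.mp4: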
ffmpeg -i intro.mp4 \
-c:v h264 -s 1280x720 -pix_fmt yuv420p -r 30 \
-c:a pcm_s16le -ar fs -ac 1 intro.mov
Replace fs with the rate you found. If the audio just played faster (rather than failing outright), the other audio parameters are already compatible.
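For completeness, the videos.txt from the question would then use the concat demuxer's file-directive format, with the converted intro listed first (a sketch, assuming the file names above):
file 'intro.mov'
file 'file1.mov'
file 'file2.mov'
file 'file3.mov'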
ffmpeg -ss 00:11:47.970 -t 3.090 -i "file.mkv" -ss 00:11:46.470 -t 1.500 -i "file" -ss 00:11:51.060 -t 0.960 -i "file.mkv" -an -c:v libvpx -crf 31 -b:v 10000k -y -filter_complex "[0:v:0][0:a:0][1:v:0][1:a:0][2:v:0][2:a:0]concat=n=3:v=1:a=1[outv][outa];[outv]scale='min(960,iw)':-1[outv];[outv]subtitles='file.srt'[outv]" -map [outv] file_out.webm -map [outa] file.mp3
I have a filter where I take three different points in a file, concat them together, and scale them down; this part works.
I'm looking to see how to add a sub burn-in step to the filter_complex, rendering the subs with the exact timings, using a file that I specify. When I use the above code it doesn't work.
The subtitles filter is receiving a concatenated stream. It does not contain the timestamps from the original segments. So the subtitles filter starts from the beginning. I'm assuming this is the problem when you said, "it doesn't work".
The simple method to solve this is to make temporary files then concatenate them.
Output segments
ffmpeg -ss 00:11:47.970 -t 3.090 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp1.webm
ffmpeg -ss 00:11:46.470 -t 1.500 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp2.webm
ffmpeg -ss 00:11:51.060 -t 0.960 -copyts -i "file.mkv" -filter_complex "scale='min(960,iw)':-1,subtitles='file.srt',setpts=PTS-STARTPTS;asetpts=PTS-STARTPTS" -crf 31 -b:v 10000k temp3.webm
The timestamps are reset when fast seek is used (-ss before -i). -copyts will preserve the timestamps so the subtitles filter knows where to start the subtitles.
Make input.txt:
file 'temp1.webm'
file 'temp2.webm'
file 'temp3.webm'
Concatenate with the concat demuxer:
ffmpeg -f concat -i input.txt -c copy output.webm
-c copy enables stream copy mode so it avoids re-encoding to concatenate.
I'm trying to create a website to stream some videos. For each video, I extract video, audio and subtitles into 3 different folders. Some videos have multiple audio tracks and multiple subtitles. I did a lot of research and I still don't know how to add all of them to the manifest. Right now, I use this command:
ffmpeg -f webm_dash_manifest \
-i video1.mp4 -f webm_dash_manifest \
-i video2.mp4 -f webm_dash_manifest \
-i audio1.webm -f webm_dash_manifest \
-i audio2.webm -f webm_dash_manifest \
-i subtitles.vtt \
-c copy -map 0 -map 1 -map 2 -map 3 \
-f webm_dash_manifest -adaptation_sets "id=0,streams=v id=1,streams=a" manifest.mpd
My two videos have different resolutions and bitrates, and it works perfectly. But I don't get any subtitles, and my two audio tracks are treated as a single audio track that has two different bitrates (just like the videos). I think I need multiple adaptation_sets, but I don't know how to create them.
How can I create that manifest the right way?
After a few days, I found the solution.
My goal is to convert a video into MPEG-DASH content, which is really great for streaming.
I will encode video to H.264, audio to AAC, and subtitles to WebVTT.
These are good settings for broad browser compatibility.
VP9 is really nice too, but it takes too long to encode for me.
Tools required:
ffmpeg: https://www.ffmpeg.org/download.html
mp4dash & mp4fragment: https://www.bento4.com/downloads/
Let's suppose we have a 1080p video file "video.mkv" with these streams:
0: video stream
1: audio stream, it language
2: audio stream, en language
3: subtitle stream, it language
4: subtitle stream, en language
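To check the stream indices and language tags of your own file, an ffprobe call along these lines should work (a sketch):
ffprobe -v error -show_entries stream=index,codec_type:stream_tags=language -of compact video.mkv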
1. Extracting the different streams
1.1 Video
I extract and transcode the video stream to different resolutions and bitrates:
ffmpeg -i video.mkv -an -sn -c:0 libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 5300k -maxrate 5300k -bufsize 2650k -vf 'scale=-1:1080' tmp/video/video-1080.mp4
ffmpeg -i video.mkv -an -sn -c:0 libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 2400k -maxrate 2400k -bufsize 1200k -vf 'scale=-1:720' tmp/video/video-720.mp4
ffmpeg -i video.mkv -an -sn -c:0 libx264 -x264opts 'keyint=24:min-keyint=24:no-scenecut' -b:v 600k -maxrate 600k -bufsize 300k -vf 'scale=-1:360' tmp/video/video-360.mp4
1.2 Audio
ffmpeg -i video.mkv -map 0:1 -ac 2 -ab 192k -vn -sn tmp/audio/audio-it.mp4
ffmpeg -i video.mkv -map 0:2 -ac 2 -ab 192k -vn -sn tmp/audio/audio-en.mp4
1.3 Subtitle
ffmpeg -i video.mkv -map 0:3 -vn -an tmp/subtitle/subtitle-it.vtt
ffmpeg -i video.mkv -map 0:4 -vn -an tmp/subtitle/subtitle-en.vtt
You can use the "-loglevel warning" option to see less information.
2. Fragment video and audio
2.1 Video
mp4fragment tmp/video/video-1080.mp4 tmp/video/f-video-1080.mp4
mp4fragment tmp/video/video-720.mp4 tmp/video/f-video-720.mp4
mp4fragment tmp/video/video-360.mp4 tmp/video/f-video-360.mp4
2.2 Audio
mp4fragment tmp/audio/audio-it.mp4 tmp/audio/f-audio-it.mp4
mp4fragment tmp/audio/audio-en.mp4 tmp/audio/f-audio-en.mp4
3. Split files and create the DASH manifest
mp4dash --mpd-name=manifest.mpd tmp/video/f-video-1080.mp4 tmp/video/f-video-720.mp4 tmp/video/f-video-360.mp4 tmp/audio/f-audio-it.mp4 tmp/audio/f-audio-en.mp4 \[+format=webvtt,+language=it\]tmp/subtitle/subtitle-it.vtt \[+format=webvtt,+language=en\]tmp/subtitle/subtitle-en.vtt
You can now delete the tmp folder
rm -rf tmp
(and your source file if you don't need it anymore)
You now have your MPEG-DASH content, which can be streamed. You have to serve your files (allow CORS and enable byte-range requests).
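For example, a minimal nginx location block for this could look like the following (an assumption about your setup; nginx serves static files with byte-range support by default, so only the CORS header needs to be added):
location /dash/ {
    # hypothetical path to the mp4dash output; adjust to your layout
    add_header Access-Control-Allow-Origin *;
}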
I use Angular and rx-player as the player. I can switch language and subtitles, and the video quality adapts to the client's bandwidth!
Rx-player: https://github.com/canalplus/rx-player
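As an aside, if you want to stay with the pure ffmpeg webm_dash_manifest approach from the question: its adaptation_sets option also accepts explicit output-stream indices instead of the v/a shorthands, so the two audio tracks can at least be split into separate sets (a sketch based on the question's mapping; as far as I know that muxer still has no subtitle support):
ffmpeg -f webm_dash_manifest -i video1.mp4 -f webm_dash_manifest -i video2.mp4 \
-f webm_dash_manifest -i audio1.webm -f webm_dash_manifest -i audio2.webm \
-c copy -map 0 -map 1 -map 2 -map 3 \
-f webm_dash_manifest -adaptation_sets "id=0,streams=0,1 id=1,streams=2 id=2,streams=3" manifest.mpd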
I've got an odd issue that's been bugging me for a while. I'm converting another format to video using FFmpeg; the conversion takes place beforehand and the result is fed into FFmpeg to be finally converted to an mp4.
Oddly, I seem to be getting a little click at the start of the resulting video; it's not present in the original audio but shows up in the final video.
Here is the sample audio. You'll notice that it has no pop at the start.
Here is the raw video input.
Here is the video my command is generating.
Here is the command I'm using to reproduce the issue (the actual conversion takes place in a Python script feeding FFmpeg the video via stdin and the audio via a temp file):
cat debug_raw_video.bin | ffmpeg -hide_banner -loglevel info -y -s 256x192 -r 30 -f rawvideo -thread_queue_size 600 -pix_fmt rgb8 -i pipe:0 -f s16le -ar 11025 -ac 1 -guess_layout_max 0 -i ./debug_audio.wav -vcodec libx264 -pix_fmt yuv420p -movflags faststart -acodec aac -strict experimental -vf scale=512:384:flags=neighbor -threads 0 -preset medium -tune animation ./out.mp4
FFmpeg version:
ffmpeg version 2.8.15 Copyright (c) 2000-2018 the FFmpeg developers
Also have the same issue with this version:
ffmpeg version 3.3.4-static http://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2017 the FFmpeg developers
Why am I getting a little click/pop at the beginning? I've been trying to figure this out for quite a while.
It appears you're specifying that the input audio is raw, but it's not:
$ file debug_audio.wav
debug_audio.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, mono 11025 Hz
So I imagine the click you're hearing is the wav header being processed as audio (a standard RIFF/WAVE header is 44 bytes, which read as raw s16le mono is 22 samples, about 2 ms at 11025 Hz, short enough to sound like a brief click). If I remove the related options, -f s16le and -ar 11025, ffmpeg correctly determines that the audio input is in wav format and produces a click-less output:
cat debug_raw_video.bin | ffmpeg -hide_banner -loglevel info -y -s 256x192 -r 30 -f rawvideo -thread_queue_size 600 -pix_fmt rgb8 -i pipe:0 -ac 1 -i ./debug_audio.wav -vcodec libx264 -pix_fmt yuv420p -movflags faststart -acodec aac -strict experimental -vf scale=512:384:flags=neighbor -threads 0 -preset medium -tune animation ./out.mp4
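If you really do need to treat the audio input as raw PCM (for example, if the temp file won't always have a header), one option is to strip the WAV header first by converting to headerless PCM, a sketch:
ffmpeg -i debug_audio.wav -f s16le debug_audio.raw
and then keep the original -f s16le -ar 11025 options pointing at debug_audio.raw instead.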
Let's say I want to cut part of an mp4 video and resize it from 1280x720 to 854x480.
My command looks like this:
ffmpeg -ss 45 -i source.mp4 -ss 10 -to 20 \
-acodec aac -ar 44100 -ac 2 -c:v libx264 \
-crf 26 -vf scale=854:480:force_original_aspect_ratio=decrease,pad=854:480:0:0,setsar=1/1,setdar=16/9 \
-video_track_timescale 29971 -pix_fmt yuv420p \
-map_metadata 0 -avoid_negative_ts 1 -y dest.mp4
The problem is, when I don't use the option avoid_negative_ts, the resulting video has some issues with time bases etc., and therefore cannot later be converted by other libs, for example Swift's AVFoundation.
But when I use this option, the video does not start with a keyframe.
Using ffprobe I see start_time=0.065997 or other values other than 0.
How can I use the option avoid_negative_ts and still get a video that starts with a keyframe?
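In case it's useful, this is how I check the first video packet (a sketch; a flags value starting with K marks a keyframe):
ffprobe -v error -select_streams v:0 -show_entries packet=pts_time,flags -read_intervals "%+#1" -of compact dest.mp4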
I'm successfully streaming silent video with music added from my Raspberry Pi (Raspbian) to YouTube via ffmpeg, with the help of this GitHub gist and this post:
raspivid -o - -t 0 -vf -hf -w 1280 -h 720 -fps 25 -b 4000000 | \
ffmpeg -i music.wav \
-f h264 -i - -vcodec copy -acodec aac -ab 128k -g 50 -strict experimental \
-f flv rtmp://a.rtmp.youtube.com/live2/STREAMKEY
The last step of my project is to add a transparent, full width/height PNG overlay to the video (1280x720 in my case). I've seen a few related answers such as this one and this one.
With the added complexity of piping in a camera feed, mixing in an audio source, and outputting to a video stream, I haven't succeeded in adding the image overlay. Where/how would I add a transparent image overlay in the example above?
The ffmpeg part will be:
ffmpeg -i music.wav \
-f h264 -i - -i overlay.png \
-filter_complex "[1][2]overlay" \
-vcodec libx264 -preset ultrafast -tune zerolatency -acodec aac -ab 128k -g 50 -strict experimental \
-f flv rtmp://a.rtmp.youtube.com/live2/STREAMKEY
Since you're altering the video contents, copy can't be used, and the video has to be re-encoded.