I recently asked about how I could download segments of an online m3u8 file, and someone pointed out that this could be accomplished via ffmpeg:
ffmpeg -i [LINK] -codec copy [OUTPUT FILE] #downloads only audio segments;
ffmpeg -i [LINK] -bsf:a aac_adtstoasc -vcodec copy -c copy -crf 50 [OUTPUT] #downloads audio and video segments
For those who aren't familiar, m3u8 is formatted kind of like a "playlist", with an m3u8 file pointing to a bunch of smaller "segments" which are pieced together to form the whole of the video. As a result, it's completely possible to halt the above commands partway through their execution and still produce a watchable video (i.e. one that will be interpreted correctly by video editors).
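For reference, a media playlist typically looks something like this (segment names and durations made up for illustration):
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
seg_000.ts
#EXTINF:10.0,
seg_001.ts
#EXTINF:10.0,
seg_002.ts
#EXT-X-ENDLIST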
I'm wondering if there's a built-in method with ffmpeg that allows me to grab segments N-M of a given m3u8. If there are methods outside of ffmpeg, feel free to mention them as well. Thanks for the help.
After having looked into it, I can say that this isn't possible via ffmpeg. You could theoretically use the -ss and -t parameters to specify a start point and duration, but ffmpeg appears to read through every segment up to the specified point, which makes the download prohibitively slow.
If you want to download only a specific number of segments, you need to look at the m3u8 file, find its associated media playlist, and download the segments listed there yourself.
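For anyone doing that by hand, a rough sketch of the workflow in a POSIX shell could look like this (the URL, the segment range 10-20, and the flat segment filenames are placeholder assumptions):
# fetch the media playlist, keep only the segment URIs, and pick segments 10-20
BASE="https://example.com/stream"
curl -s "$BASE/media.m3u8" | grep -v '^#' | sed -n '10,20p' > wanted.txt
: > list.txt
while read seg; do
  curl -s -o "$seg" "$BASE/$seg"           # download each segment
  printf "file '%s'\n" "$seg" >> list.txt  # build a concat list as we go
done < wanted.txt
# stitch the downloaded .ts pieces into a single file without re-encoding
ffmpeg -f concat -safe 0 -i list.txt -c copy -bsf:a aac_adtstoasc clip.mp4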
I am making a datamoshing program in C++, and I need to find a way to remove one frame from a video (specifically, the p-frame right after a sequence jump) without re-encoding the video. I am currently using h.264 but would like to be able to do this with VP9 and AV1 as well.
I have one way of going about it, but it doesn't work for one frustrating reason (mentioned later). I can turn the original video into two intermediate videos - one with just the i-frame before the sequence jump, and one with the p-frame that was two frames later. I then create a concat.txt file with the following contents:
file video.mkv
file video1.mkv
And run ffmpeg -y -f concat -i concat.txt -c copy output.mp4. This produces the expected output, although it is of course not as efficient as I would like, since it requires creating intermediate files and reading the .txt file from disk (performance is very important in this project).
But worse yet, I couldn't generate the intermediate videos with ffmpeg; I had to use avidemux. I tried all sorts of variations on ffmpeg -y -ss 00:00:00 -i video.mp4 -t 0.04 -codec copy video.mkv, but that command really bugs out on videos that are only one or two frames long, while it works fine for longer videos. My best guess is that there is some internal check to ensure the output video is not corrupt (which, unfortunately, is exactly what I want it to be!).
Maybe there's a way to do it this way that gets around that problem, or better yet, a more elegant solution to the problem in the first place.
Thanks!
If you know the PTS or data offset or packet index of the target frame, then you can use the noise bitstream filter. This is codec-agnostic.
ffmpeg -copyts -i input -c copy -enc_time_base -1 -bsf:v:0 noise=drop=eq(pos\,11291) out
This will drop the packet from the first video stream stored at offset 11291 in the input file. See other available variables at http://www.ffmpeg.org/ffmpeg-bitstream-filters.html#noise
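For example, if it is easier to identify the target frame by its packet index than by its byte offset, the same filter can key on the n variable instead (the index 42 here is just a placeholder):
ffmpeg -copyts -i input.mp4 -c copy -enc_time_base -1 -bsf:v:0 noise=drop=eq(n\,42) out.mp4
This drops the 43rd packet (indices start at 0) of the first video stream, again without re-encoding.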
I have two time-lapse videos with a rate of 1 fps; the camera took one image every minute. Unfortunately, we forgot to set the camera to burn/print the time and date onto every image. I am now trying to burn the time and date into the video after the fact.
I decoded the two .avi files with ffmpeg into ~7000 single images each and wrote an R script that renamed the files to their "creation" date (the time and date the pictures were taken). Then I used exiftool to write that information into the files' EXIF/metadata.
The final images in the folder are looking like this:
2018-03-12 17_36_40.png
2018-03-12 17_35_40.png
2018-03-12 17_34_40.png
...
Is it possible to create a video from these images again with ffmpeg (or a similar tool) with a "timestamp" burned in, so that you can see the time and date while watching?
I think this can be done in two steps.
First, you create an mp4 file with the timestamp for every picture. Here is a batch file which creates such video files:
@echo off
set "INPUT=C:\t\video"
rem make sure the output directory exists before ffmpeg writes into it
if not exist output mkdir output
for %%a in ("%INPUT%\*.png") do (
ffmpeg -i "%%~a" -vf "drawtext=text=%%~na:x=50:y=100:fontfile=/Windows/Fonts/arial.ttf:fontsize=25:fontcolor=white" -c:v libx264 -pix_fmt yuv420p "output/%%~na.mp4"
)
This will create an mp4 for every png picture, placed in the output/ directory.
Explanation
The for loop cycles through all *.png files and creates an *.mp4 file for each.
The text is added via the drawtext overlay; the batch variable %%~na expands to the filename without the .png suffix, so the timestamp in the filename becomes the on-screen text.
The font used is arial.ttf (feel free to use any font you want).
x and y are the coordinates where the text is placed.
-c:v libx264 selects x264 as the encoder.
-pix_fmt yuv420p is there so that even crappy players can play the result.
The second step is to concatenate the resulting H.264 files using the concat demuxer.
You need to create a file list, e.g. file_list.txt:
file '2018-03-12 17_34_40.mp4'
duration 10
file '2018-03-12 17_35_40.mp4'
duration 10
file '2018-03-12 17_36_40.mp4'
duration 10
...
Examples can be found in the ffmpeg concat demuxer documentation.
Then you simply concat all the *.mp4 files (run this in the output subdirectory):
ffmpeg -safe 0 -f concat -i file_list.txt -c copy output.mp4
This will create one concatenated output.mp4 file.
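If you don't want to type the list by hand, a small sketch like this generates it (POSIX shell, assuming the clips sort correctly by name; the batch for loop from the first step can be adapted the same way on Windows):
# build file_list.txt from all mp4 clips in output/, 10 seconds per entry
cd output
for f in *.mp4; do
  printf "file '%s'\nduration 10\n" "$f"
done > file_list.txt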
If I understand correctly, you have some number of YUV frames and their information saved separately.
For example:
Let's say you originally have a 10-second video at 24 frames per second (constant, not variable).
So you have 240 YUV frames. In this or a similar case, you can generate a video file via ffmpeg in a container format like mp4 by supplying the resolution and frame rate. You will not need any extra metadata to turn it back into a playable video; it will play normally in any decent player.
If you only have key frames with varying timing between them, then yes, you will need the timing metadata, and I'm not sure how you would do that.
Since you have the source, you can extract every single frame (24 frames per second in this example) and you are good to go. Similar answers have already been given for this; just look around.
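For instance, if the frames are concatenated into one raw file, a command along these lines wraps them back into an mp4 (the resolution, frame rate, pixel format, and file names here are assumptions and must match how the frames were dumped):
ffmpeg -f rawvideo -pix_fmt yuv420p -s 1280x720 -r 24 -i frames.yuv -c:v libx264 -pix_fmt yuv420p output.mp4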
Hope that helps.
I am attempting to use ffmpeg to record an HLS live stream described by input.m3u8. input.m3u8 contains a number of different bitrate streams: input_01.m3u8, input_02.m3u8, etc., which contain the actual MPEG-TS segmented video files. The number and quality of the available streams frequently varies. I am trying to make this an automated process so that my co-workers can use it, but I need ffmpeg to always select the best available stream from the input.m3u8 file. Can anybody point me in the right direction on this?
Currently I use:
ffmpeg -n -i "http://path/input_0x.m3u8" -c copy "%path%\%FileName%"
where %path% and %filename% are defined by the batch file calling ffmpeg and I manually look up the best bitrate stream.
I am starting with a high-res video file and I would like to create 3 variants (low quality, mid quality, and high quality) for mobile streaming. I want these low/mid/high variants to be segmented into ts pieces that the m3u8 file will point to. Is there a way to do this in one line in ffmpeg?
I have successfully generated an m3u8 file and ts segments with ffmpeg; do I need to do this 3x and set specs for low/mid/high? If so, how do I get a single m3u8 file to point to all variants as opposed to one for each variant?
This is the command I used to generate the m3u8 file along with the ts segments.
ffmpeg -i C:\Users\george\Desktop\video\hos.mp4 -strict -2 -acodec aac -vcodec libx264 -crf 25 C:\Users\user\Desktop\video\hos_Phone.m3u8
Yes, you need to encode all variants and generate the media playlists first (the playlists containing the segments).
If you want, you can do it in one command since ffmpeg supports multiple inputs/outputs. E.g.:
ffmpeg -i input \
... [encoding parameters 1] ... output1 \
... [encoding parameters 2] ... output2 \
... [encoding parameters 3] ... output3
You must provide the variants in multiple qualities/bitrates, but the aspect ratio should remain the same. Keeping the aspect ratio was initially mandatory, but in the latest HLS authoring guide it has been downgraded to a recommendation.
All variant streams must be keyframe-aligned, so set a GOP size using the -g option, disable scene-cut detection, and use a segment duration (hls_time) that is a multiple of your keyframe interval, as in the sketch below.
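As a sketch only, for a 24 fps source (the bitrates, CRF values, scales, and segment length are assumptions to adapt to your material), the three variants could be produced like this:
ffmpeg -i hos.mp4 \
-vf scale=-2:360 -c:v libx264 -crf 28 -g 48 -keyint_min 48 -sc_threshold 0 -c:a aac -b:a 96k -hls_time 4 -hls_playlist_type vod low.m3u8 \
-vf scale=-2:540 -c:v libx264 -crf 25 -g 48 -keyint_min 48 -sc_threshold 0 -c:a aac -b:a 128k -hls_time 4 -hls_playlist_type vod mid.m3u8 \
-vf scale=-2:720 -c:v libx264 -crf 22 -g 48 -keyint_min 48 -sc_threshold 0 -c:a aac -b:a 128k -hls_time 4 -hls_playlist_type vod high.m3u8
Here -g 48 gives a 2-second GOP at 24 fps, and -hls_time 4 keeps each segment a multiple of that keyframe interval.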
Once you have all three m3u8 media playlists, you can manually create the master playlist that points to each media playlist.
Here is an example from the Apple HLS documentation; you must change the bandwidth, codecs, resolution, and playlist filenames according to your own encoding options:
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=2855600,CODECS="avc1.4d001f,mp4a.40.2",RESOLUTION=960x540
medium.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=5605600,CODECS="avc1.640028,mp4a.40.2",RESOLUTION=1280x720
high.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1755600,CODECS="avc1.42001f,mp4a.40.2",RESOLUTION=640x360
low.m3u8
The aspect ratio does not have to be the same; that makes no sense. How could you know what the client can play? Aspect ratios are 4:3 for non-HD and 16:9 for HD variants.
You don't want to do all your variants in one ffmpeg command if you need segment times to be consistent. Also, watch out when transcoding downward: if you go from 1080 to 360, there might be issues. One that I often get is that the audio degrades and sounds weird. I try to go down by no more than half if I want high quality.
@DavidC That hex identifies the codec profile and level.
I'm on Windows 7 and I have many .MP4 videos that I want to convert to .flv. I have tried ffmpeg and Free FLV Converter, but each time the results are not what I'm looking for.
I want a video of the same quality (or nearly the same, still looking good) and a smaller file size, because right now, each time I try, the resulting video looks pretty bad and the file size just increases.
How can I get a good-looking video, smaller in size, and in .FLV?
Thanks a lot!
First, see slhck's blog post on superuser for a good FFmpeg tutorial. FLV is a container format and can support several different video formats such as H.264 and audio formats such as AAC and MP3. The MP4 container can also support H.264 and AAC, so if your input uses these formats then you can simply "copy and paste" the video and audio from the mp4 to the flv. This will preserve the quality because there is no re-encoding. These two examples do the same thing, which is copying video and audio from the mp4 to the flv, but the ffmpeg syntax varies depending on your ffmpeg version. If one doesn't work then try the other:
ffmpeg -i input.mp4 -c copy output.flv
ffmpeg -i input.mp4 -vcodec copy -acodec copy output.flv
However, you did not supply any information about your input, so these examples may not work for you. To reduce the file size you will need to re-encode. The link I provided shows how to do that. Pay special attention to the Constant Rate Factor section.
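As a rough example (the CRF value, preset, and audio bitrate are assumptions to tune to taste), a re-encode that usually shrinks the file while keeping the quality reasonable looks like this:
# lower CRF = better quality but a bigger file; 23 is the x264 default
ffmpeg -i input.mp4 -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k output.flv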