MediaConvert split audio into multiple output chunks - aws-media-convert

I would like to create three audio chunks / segments from a 30-minute-long audio file using AWS MediaConvert. Is it possible to do this with MediaConvert and if so how?
Here is an example using ffmpeg:
ffmpeg -i 30min_audio.wav -c copy -f segment -segment_times 0,600,1200 output%d.wav

Unfortunately, there isn't a way to do this in the service currently. The service does not support input clipping for audio-only inputs [1].
A creative way (if you don't care as much about the output format) would be to use an adaptive bitrate (ABR) packaging type, such as Apple HLS, and set the segment length. Note that this will give you consistent segmenting (except for the last segment).
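As a rough sketch, the relevant part of the job settings JSON might look like the excerpt below (the destination path is a placeholder; SegmentLength is in seconds, so 600 would yield three 10-minute segments from a 30-minute input):
"OutputGroupSettings": {
  "Type": "HLS_GROUP_SETTINGS",
  "HlsGroupSettings": {
    "SegmentLength": 600,
    "MinSegmentLength": 0,
    "SegmentControl": "SEGMENTED_FILES",
    "Destination": "s3://your-bucket/segments/audio"
  }
}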
One thing to point out: in your ffmpeg command you are using the -c copy option, which performs a transmux (stream copy). MediaConvert is a transcoding service, so an encode will be performed when you process jobs in the service.
I like to think of it as some bits in, new bits out.
Resources:
[1] https://docs.aws.amazon.com/mediaconvert/latest/ug/feature-limitations-for-audio-only.html

Related

How do I encode a video stream to multiple output formats in parallel with ffmpeg?

I would like to use one FFmpeg process to receive video input and then pass that video to multiple separate encoder processes in order to efficiently make use of all available CPU cores.
The FFmpeg wiki article on Creating multiple outputs has this note from rogerdpack:
Outputting and re encoding multiple times in the same FFmpeg process will typically slow down to the "slowest encoder" in your list. Some encoders (like libx264) perform their encoding "threaded and in the background" so they will effectively allow for parallel encodings, however audio encoding may be serial and become the bottleneck, etc. It seems that if you do have any encodings that are serial, it will be treated as "real serial" by FFmpeg and thus your FFmpeg may not use all available cores. One work around to this is to use multiple ffmpeg instances running in parallel, or possible piping from one ffmpeg to another to "do the second encoding" etc. Or if you can avoid the limiting encoder (ex: using a different faster one [ex: raw format] or just doing a raw stream copy) that might help.
The article has an example of using the tee pseudo-muxer, but it uses a single instance of FFmpeg. The example of piping from one instance of FFmpeg to another only allows one encoder process.
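For reference, a single-instance tee pseudo-muxer command might look roughly like this (output names, codecs, and the UDP address are placeholders); note that both outputs receive the same single encode:
ffmpeg -i input.mp4 -map 0:v -map 0:a -c:v libx264 -c:a aac -f tee "local_copy.mp4|[f=mpegts]udp://10.0.0.1:1234"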
A 10-year-old version of the same article mentioned piping to a tee process, but that passage was subsequently deleted:
Another option is to output from FFmpeg to "-" then to pipe that to a "tee" command, which can send it to multiple other processes, for instance 2 different other ffmpeg processes for encoding (this may save time, as if you do different encodings, and do the encoding in 2 different simultaneous processes, it might do encoding more in parallel than elsewise). Un benchmarked, however.
Along the same lines: some of the example commands use mpegts to encapsulate frames before passing them between processes. Does this impose any constraint on the codecs or types of metadata that can be sent to downstream processes?
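The piping approach described in that deleted passage might look roughly like the sketch below. It is untested here, relies on bash process substitution, and the filenames and codecs are placeholders:
# one demux/remux process feeds two independent encoder processes via mpegts pipes
ffmpeg -i input.mp4 -c copy -f mpegts - \
  | tee >(ffmpeg -y -f mpegts -i - -c:v libx264 -c:a aac out_h264.mp4) \
        >(ffmpeg -y -f mpegts -i - -c:v libx265 -c:a aac out_h265.mp4) \
  > /dev/null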

ffmpeg timing individual frames of an image sequence

I have an image sequence of WebP images concatenated (for various reasons) into a single file. I have full control over the single file's format and can potentially reformat it as a container (IVF, etc.) if a suitable one exists.
I would like ffmpeg to consume this input, time each individual frame properly (say the first is displayed for 5 seconds, the next for 3 seconds, then 7, 12, etc.), and output a video (mp4).
My current approach is to use image2pipe or webp_pipe followed by a list of loop filters, but I am curious whether there are any solid alternatives, perhaps a simple format/container I could use, to reduce or completely avoid the ffmpeg filter instructions, as there might be hundreds or more of them in total.
ffmpeg -filter_complex "...movie=input.webps:f=webp_pipe,loop=10:1:20,loop=10:1:10..." -y out.mp4
I am aware of the concat demuxer, but having a separate file for each input image is not an option in my case.
I have tried the IVF format, which works OK for VP8 frames but doesn't seem to accept WebP. An alternative would be welcome, but way too many formats exist for me to study each one, so help would be appreciated.
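For reference, the concat demuxer approach mentioned above (which does require one file per image, so it may not fit this case) times frames with duration directives, roughly like this (filenames are placeholders):
# frames.txt
file 'frame01.webp'
duration 5
file 'frame02.webp'
duration 3
file 'frame03.webp'
duration 7
# the last file is listed again so its duration is honored
file 'frame03.webp'

ffmpeg -f concat -safe 0 -i frames.txt -vsync vfr -pix_fmt yuv420p out.mp4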

How to split a video up into <2.5GB parts with FFmpeg

I am trying to find a way to send large video files through Firefox Send.
Because Firefox Send has a 2.5 GB limit per file, I need to break a video file up into parts that are each less than 2.5 GB.
Is there a relatively simple way to reliably split a video based on data limits using FFmpeg, rather than using duration? (Using duration would be unreliable, because equal-length portions of a video can differ in size.)
EDIT 1: I apologize for the lack of clarity; I was planning on using a Bash script with FFmpeg and ffsend. I was wondering if there is any way to do this through video processing rather than zip compression.
The standard utility split is intended for precisely this sort of thing.
# sender does:
split -b 2500m file.mpg file.mpg__split_
# recipient downloads all the pieces and does:
cat file.mpg__split_* > file.mpg
A disadvantage of this procedure is that the individual parts are not playable on their own.
An advantage is that the final output is identical to the original.
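If the parts do need to be playable on their own, one possible compromise (not part of the answer above) is a time-based split with ffmpeg's segment muxer, picking a segment duration that keeps each piece under the limit given the file's average bitrate. Sizes will only be approximate and cuts land on keyframes; the filename and duration here are placeholders:
ffmpeg -i input.mp4 -c copy -map 0 -f segment -segment_time 1200 -reset_timestamps 1 part%03d.mp4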

Convert m3u8 (HLS) to mpd (MPEG-DASH)

I have a live HLS stream [https://82-80-192-30.vidnt.com/ipbc_IPBCchannel11LVMRepeat/definst/IPBCchannel11LVM_3.stream/playlist.m3u8] and I want to convert it to MPEG-DASH.
What is the best practice?
The stream is already H.264/AAC, so I understand I do not need to re-encode; I just need to transmux.
What should I use?
ffmpeg? mp4box?
Notes:
I used nginx-rtmp-module (https://github.com/ut0mt8/nginx-rtmp-module/) to create DASH from an RTMP stream, following this tutorial: https://isrv.pw/html5-live-streaming-with-mpeg-dash
But nginx-rtmp-module only accepts RTMP streams as input, so it did not work for me with an HLS stream.
I used ffmpeg to create DASH from the m3u8 as follows:
ffmpeg -i https://82-80-192-30.vidnt.com/ipbc_IPBCchannel11LVMRepeat/_definst_/IPBCchannel11LVM_3.stream/playlist.m3u8 -strict -2 -min_seg_duration 2000 -window_size 5 -extra_window_size 5 -use_template 1 -use_timeline 1 -f dash out.mpd
But this is very limited: I can't control the segment duration.
The min_seg_duration parameter of ffmpeg does not work very well for me; it only sets the minimum duration, whereas I want to limit the maximum duration of each segment (segments come out at ~10 seconds, while I need ~2-4 seconds since I'm playing live).
Firstly it is worth saying that if you can avoid doing this you will be saving yourself a whole lot of work!
Most devices and clients these days can play both HLS and DASH streams, so the usual approach is to add any extra functionality needed in your app or client.
If you do have to convert server side, then it's worth being aware that while HLS streams typically used TS segments in the past, support for fragmented MP4 has recently become available within the HLS ecosystem.
If you have TS video streams then you will need to do a conversion along the lines you outline above with ffmpeg.
If you have fragmented MP4 then you should actually have the correct format already and may find you just have to create the manifest file so DASH can access the fragmented mp4 streams.
All the above assumes that your content is not encrypted, or that you don't have to support encryption. If it is encrypted, you may not be able to convert the media at all, or you may have to encrypt the media differently for some streams than others, as most deployed Windows and Chrome devices and browsers currently use a slightly different encryption approach (a different AES mode) than Apple devices.
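For the TS-to-DASH transmux case, a rough sketch with a recent ffmpeg build (where the dash muxer's -seg_duration option replaces the older -min_seg_duration and gives better control over the target segment length; the playlist URL is a placeholder):
ffmpeg -i https://example.com/playlist.m3u8 -c copy \
  -seg_duration 4 -window_size 5 -extra_window_size 5 \
  -use_template 1 -use_timeline 1 -f dash out.mpd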

FFmpeg video clipping

I would like to use the ffmpeg APIs (not the command line) to clip videos to a specific range (e.g., given a 1-hour video, create a new video starting at 10 minutes and ending at 30 minutes). Are there any examples of doing this out there?
I have used the apis to stream and record video so I have a bit of background knowledge.
Thanks.
ffmpeg (the command line tool) is just a frontend to the APIs with some extras. The source of the ffmpeg CLI tool is mostly contained in a single source file, ffmpeg.c. I suggest you take a look at it to see how ffmpeg does this internally.
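As a reference point for what to reproduce with the APIs, the CLI equivalent of the 10-to-30-minute clip (seek to the in point, then copy packets for the desired duration) would be roughly:
# seek to 10 minutes, then copy ~20 minutes of packets without re-encoding (cuts fall on keyframes)
ffmpeg -ss 600 -i input.mp4 -t 1200 -c copy clip.mp4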
