How to split a video up into <2.5GB parts with FFmpeg - bash

I am trying to achieve a way to send large video files through Firefox Send.
Because Firefox Send has a 2.5 GB limit per file that one sends, I need to break up a video file into parts that are each less than 2.5GB.
Is there a relatively simple way to reliably split a video based on a size limit using FFmpeg, rather than by duration? (Splitting by duration would be unreliable, because equal-length portions of a video can have very different sizes.)
EDIT 1: I apologize for the lack of clarity. I was planning on using a Bash script that calls FFmpeg and ffsend. I was wondering if there is any way to do this through video processing rather than zip compression.

The standard utility split is intended for precisely this sort of thing.
# sender does:
# (split's "m" suffix means MiB, so 2500m is ~2.62 GB; use 2500MB, or a smaller value,
#  if the 2.5 GB limit is a strict decimal limit)
split -b 2500m file.mpg file.mpg__split_
# recipient downloads all the pieces and does:
cat file.mpg__split_* > file.mpg
A disadvantage of this procedure is that the individual parts are not playable on their own.
An advantage is that the final output is identical to the original.
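Since the question mentions driving this from a Bash script with ffsend, here is a minimal sketch of how the two steps could be glued together (the 2500MB size and the ffsend upload invocation are assumptions to check against your ffsend version):
# split into decimal 2.5 GB pieces, then upload each piece with ffsend (sketch, untested)
split -b 2500MB file.mpg file.mpg__split_
for part in file.mpg__split_*; do
    ffsend upload "$part"    # prints a share link for each part
done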

Related

How do I encode a video stream to multiple output formats in parallel with ffmpeg?

I would like to use one FFmpeg process to receive video input and then pass that video to multiple separate encoder processes in order to efficiently make use of all available CPU cores.
The FFmpeg wiki article on Creating multiple outputs has this note from rogerdpack:
Outputting and re-encoding multiple times in the same FFmpeg process will typically slow down to the "slowest encoder" in your list. Some encoders (like libx264) perform their encoding "threaded and in the background" so they will effectively allow for parallel encodings, however audio encoding may be serial and become the bottleneck, etc. It seems that if you do have any encodings that are serial, it will be treated as "real serial" by FFmpeg and thus your FFmpeg may not use all available cores. One workaround to this is to use multiple ffmpeg instances running in parallel, or possibly piping from one ffmpeg to another to "do the second encoding" etc. Or if you can avoid the limiting encoder (ex: using a different faster one [ex: raw format] or just doing a raw stream copy) that might help.
The article has an example of using a tee pseudo-muxer, but it uses a single instance of FFmpeg. The example of piping from one instance of FFmpeg to another only allows one encoder process.
A 10-year-old version of the same article mentioned using the tee process, but that passage was subsequently deleted:
Another option is to output from FFmpeg to "-" then to pipe that to a "tee" command, which can send it to multiple other processes, for instance 2 different other ffmpeg processes for encoding (this may save time, as if you do different encodings, and do the encoding in 2 different simultaneous processes, it might do encoding more in parallel than elsewise). Unbenchmarked, however.
Along the same lines: some of the example commands use the mpegts format to encapsulate frames before passing them between processes. Does that impose any constraint on the codecs or types of metadata that can be sent to the downstream processes?
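As a concrete illustration of the piping approach described in the deleted wiki text, here is a minimal Bash sketch (assumptions: a local input.mp4, the NUT container for the intermediate raw stream, and two arbitrary encoders; unbenchmarked):
# decode once, duplicate the raw stream with tee, and encode in two separate ffmpeg processes
ffmpeg -i input.mp4 -c:v rawvideo -c:a pcm_s16le -f nut - | tee \
  >(ffmpeg -y -f nut -i - -c:v libx264 -c:a aac out_h264.mp4) \
  >(ffmpeg -y -f nut -i - -c:v libvpx-vp9 -c:a libopus out_vp9.webm) \
  > /dev/null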

MediaConvert split audio into multiple output chunks

I would like to create three audio chunks / segments from a 30-minute-long audio file using AWS MediaConvert. Is it possible to do this with MediaConvert and if so how?
Here is an example using ffmpeg
ffmpeg -i 30min_audio.wav -c copy -f segment -segment_times 0,600,1200 output%d.wav
Unfortunately, there isn't a way to do this in the service currently. The service does not support input clipping for audio-only inputs [1].
A creative way (if you don't care as much about the output format) would be to use an adaptive bitrate (ABR) packaging type, such as Apple HLS, and set the segment length. Note that this will give you consistent segmenting (except for the last segment).
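For comparison, the same fixed-length segmenting idea expressed with ffmpeg's HLS muxer looks roughly like this (a sketch; the 600-second segment length matches the example above, and HLS needs a compressed codec such as AAC rather than raw WAV):
# segment the audio into 600-second HLS chunks, keeping every segment in the playlist
ffmpeg -i 30min_audio.wav -c:a aac -f hls -hls_time 600 -hls_list_size 0 output.m3u8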
One thing to point out is that your ffmpeg command uses the -c copy option, which performs a transmux (stream copy) rather than a transcode. MediaConvert is a transcoding service, so an encode will be performed if you process jobs in the service.
I like to think of it as some bits in, new bits out.
Resources:
[1] https://docs.aws.amazon.com/mediaconvert/latest/ug/feature-limitations-for-audio-only.html

ffmpeg read the current segmentation file

I'm developing a system that uses ffmpeg to store video from some IP cameras.
I'm using the segment muxer to write a new file every 5 minutes for each camera.
I have a WPF view where I can search historical video by date. In that case I use ffmpeg's concat to generate a video of the desired duration.
All of this works very well. My question is: is it possible to include the segment that is currently being written in the concatenation? For example, I need to search from date X up to the current time, but ffmpeg has not finished writing the last file yet, so when I concatenate the files, the last one is missing because its segment isn't finished.
I hope someone can give me some guidance on what I can do.
Some container formats remain playable while they are still being written. That means you can simply copy the unfinished segment and include the copy in the merge.
I suggest using FLV or MPEG-TS for this; MP4 does not support it, because an MP4 file is only finalized when writing stops. Also note that there is a delay between encoding and the data actually being written to disk.
I'm not sure whether a direct copy will leave some truncated data at the end of the in-progress segment, but ffmpeg will ignore that part during the merge, so the merged video should be fine.
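A minimal sketch of the whole flow, assuming MPEG-TS segments and hypothetical file names:
# record in 5-minute MPEG-TS segments
ffmpeg -i rtsp://camera1/stream -c copy -f segment -segment_time 300 cam1_%03d.ts
# copy the segment currently being written, then concatenate it with the finished ones
cp cam1_007.ts cam1_007_snapshot.ts
printf "file '%s'\n" cam1_005.ts cam1_006.ts cam1_007_snapshot.ts > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy search_result.ts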

Capture Video from Public Web Video Feed

I've unsuccessfully mucked around with this on my own and need help.
Given the public web camera feed at https://itsvideo.arlingtonva.us:8011/live/cam58.stream/playlist.m3u8, I'd like to be able to capture the video feed into an MP4 or MPG file with a reasonably accurate timestamp using the Windows command line (so I can put it into a batch script, etc.).
This is probably easy for someone who is already a wiz with VLC or FFmpeg or some such tool.
Additional wish list items would be to call up a higher resolution stream for a shorter duration (so as to balance I/O impact) and/or to just get still images instead of the video offered.
For instance, the m3u8 playlist has the following parameters:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=214105,CODECS="avc1.100.40",RESOLUTION=352x288
chunklist_w977413411.m3u8
Would there be a way to substitute any of these to increase the resolution and reduce the video duration in a corresponding way so that net I/O is the same? Or even to just get a still image, whether higher res or not?
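One possible starting point (not taken from the original thread) is to let ffmpeg read the playlist URL directly; the duration and output names below are placeholders, and the same commands work from a Windows batch file:
# capture 60 seconds of the live HLS feed into an MP4 without re-encoding
ffmpeg -i "https://itsvideo.arlingtonva.us:8011/live/cam58.stream/playlist.m3u8" -t 60 -c copy capture.mp4
# or grab a single still image instead of video
ffmpeg -i "https://itsvideo.arlingtonva.us:8011/live/cam58.stream/playlist.m3u8" -frames:v 1 still.jpg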

Understanding ffmpeg re parameter

I was reading about the -re option in ffmpeg.
This is what the documentation says:
-re (input)
Read input at the native frame rate. Mainly used to simulate a grab device, or live input stream (e.g. when reading from a file). Should not be used with actual grab devices or live input streams (where it can cause packet loss). By default ffmpeg attempts to read the input(s) as fast as possible. This option will slow down the reading of the input(s) to the native frame rate of the input(s). It is useful for real-time output (e.g. live streaming).
My doubt is about the apparent contradiction in the description above: it says the option should not be used with live input streams, yet at the end it says it is useful for real-time output.
Considering a situation where both the input and output are in rtmp format, should I use it or not?
Don't use it. -re is useful for real-time output when ffmpeg is able to process a source faster than real time; in that scenario, ffmpeg would send output at that faster rate, and the receiver might not be able (or might not want) to buffer and queue its input.
It (-re) is suitable for streaming from offline files: it reads them at their native frame rate (e.g. 25 fps). Otherwise, FFmpeg may read and output hundreds of frames per second, which can cause problems downstream.
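To make the distinction concrete, here is a small sketch (the server URLs are placeholders):
# streaming a local file to RTMP: add -re so frames are sent at the source's native rate
ffmpeg -re -i recording.mp4 -c copy -f flv rtmp://example.com/live/stream
# restreaming an input that is already live: omit -re, since the source is already paced in real time
ffmpeg -i rtmp://example.com/live/source -c copy -f flv rtmp://example.com/live/restream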
