FFmpeg showfreqs with Custom Frequency Ranges, Is It Possible? - ffmpeg

I need to display showfreqs with 32 or 64 bands, but with custom frequency ranges, just like an old equalizer indicator.
In this case, win_size is not what I want because the frequency distribution doesn't look balanced: the treble section is too wide and the bass section is too narrow.
Is it possible? If so, how can it be done?
I've looked at the showfreqs documentation but didn't find any clue about this.
Thank you.
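
For context, a baseline showfreqs invocation looks roughly like the sketch below (a hedged example; the input file, output, and option values are assumptions, not the asker's actual command). win_size only sets the FFT window size, which is why changing it does not move the band edges.

    # Minimal baseline sketch; file names and option values are assumptions.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "input.mp3",
        "-filter_complex",
        # win_size controls the FFT size, not where the band edges fall,
        # so bass stays narrow and treble stays wide on a linear axis.
        "showfreqs=size=640x240:mode=bar:ascale=log:win_size=2048",
        "-c:v", "libx264", "output.mp4",
    ], check=True)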

Related

How can I calculate the frequency of pixel change within a region of interest in an AVI file?

I have a number of AVI files for which I can identify certain regions of interest.
I am interested in calculating the frequency of pixel change (e.g. from black to white, or some other change) within each region of interest. Is there any tool or workflow I should be looking at?
I think you would have to do the following:
Extract each frame of the AVI into a bitmap (you could use something like FFmpeg)
Then read each pixel of the region into some type of array
Then compare each array to the first one or to the previous one, depending on your needs, to find the differences, and then calculate the frequency of change.
OpenCV is a library you can use, with a lot of algorithms already developed for such tasks. It works with Python and also C++, among other languages.
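
A minimal sketch of those steps with OpenCV might look like this (the file name, the region coordinates, and the change threshold of 30 are assumptions):

    # Count changed pixels per frame inside a region of interest.
    import cv2
    import numpy as np

    x, y, w, h = 100, 50, 200, 150          # region of interest (placeholder)
    cap = cv2.VideoCapture("input.avi")

    prev = None
    changes_per_frame = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # a pixel "changed" if its grayscale value moved by more than 30
            changed = np.count_nonzero(cv2.absdiff(roi, prev) > 30)
            changes_per_frame.append(changed)
        prev = roi
    cap.release()

    print(changes_per_frame)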

How would I create a radially offset mosaic of rtsp streams that transitions to a logo

I'm new to stack overflow, but I've been researching how to do this for a couple weeks to no avail. I'm hoping perhaps one of you has some knowledge I haven't seen online yet.
Here is a crude illustration of what I hope to accomplish. I have a video wall of eight monitors - four each of two different sizes. The way it's set up now, all eight monitors are treated together as one big monitor displaying an oddly shaped cutout of a desktop.
Eventually I need each individual monitor to display a separate RTSP stream for about thirty seconds, then have the entire display - all eight monitors in conjunction - fade out into a large logo.
My problem right now is that I don't know of a way to mask an rtsp stream so it looks like this rather than this, let alone how to arrange them into a weirdly spaced, oddly angled, multiple aspect-ratio mosaic like in the original illustration.
Thank you all for your time. I'm just an intern here without insane technical knowhow, but I'll try to clarify as much as I can.
-J
I believe -filter_complex is one of the ffmpeg CLI flags that you need. You can find many examples online, but here are a few links of interest:
Here's an ffmpeg wiki on creating a mosaic https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
FFMpeg - Combine multiple filter_complex and overlay functions
That should get you started, but you will probably need to add customization depending on frame size and formats.
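
As a rough sketch of what such a -filter_complex graph can look like for two RTSP inputs placed side by side (the stream URLs, sizes, and offsets are placeholders; a real eight-monitor wall would need one scale/overlay pair per monitor):

    # Scale two RTSP streams and overlay them onto a blank canvas.
    import subprocess

    filtergraph = (
        "nullsrc=size=1280x480 [base];"
        "[0:v] setpts=PTS-STARTPTS, scale=640x480 [left];"
        "[1:v] setpts=PTS-STARTPTS, scale=640x480 [right];"
        "[base][left] overlay=shortest=1:x=0 [tmp];"
        "[tmp][right] overlay=shortest=1:x=640"
    )

    subprocess.run([
        "ffmpeg",
        "-i", "rtsp://camera1/stream",   # placeholder URLs
        "-i", "rtsp://camera2/stream",
        "-filter_complex", filtergraph,
        "-c:v", "libx264", "-t", "30", "mosaic.mp4",
    ], check=True)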

Detect frames that have a given image/logo with FFmpeg

I'm trying to split a video by detecting the presence of a marker (an image) in the frames. I've gone over the documentation and I see removelogo but not detectlogo.
Does anyone know how this could be achieved? I know what the logo is and the region it will be on.
I'm thinking I can extract all frames to PNGs and then analyse them one by one (or n by n), but it might be a lengthy process...
Any pointers?
ffmpeg doesn't have any such ability natively. The delogo filter simply takes a rectangular region from its parameters and interpolates that region based on its surroundings; it fills in the region regardless of what it previously contained.
If you need to detect the presence of a logo, that's a totally different task. You'll need to create it yourself; if you're serious about this, I'd recommend that you start familiarizing yourself with the ffmpeg filter API and get ready to get your hands dirty. If the logo has a distinctive color, that might be a good way to detect it.
Since what you're after is probably just outputting information on which frames contain (or don't contain) the logo, one filter to look at as a model is the blackframe filter (which searches for all-black frames).
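
For a sense of the per-frame output such a filter produces, this is roughly how blackframe can be driven today (the input name and thresholds are placeholders); a hypothetical detect-logo filter would log something similar for frames containing the logo:

    # Run the blackframe filter and print its per-frame log lines.
    import subprocess

    result = subprocess.run(
        ["ffmpeg", "-i", "input.mp4",
         "-vf", "blackframe=amount=98:threshold=32",
         "-an", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # blackframe reports detected frames on stderr, one line per frame
    for line in result.stderr.splitlines():
        if "blackframe" in line.lower():
            print(line)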
You can write a detect-logo module: decode the video (YUV 420P format), feed each raw frame to this module, and do a SAD (Sum of Absolute Differences) on the region where you expect the logo; if the SAD is negligible, it's a match, so record the frame number. You can then split the video at these frames.
The SAD is done only on the Y (luma) plane. To save processing, you can scale the video to a lower resolution before decoding it.
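
A rough sketch of that SAD check on the luma plane, with ffmpeg piping raw grayscale frames into Python (the file names, logo region, reduced resolution, and match threshold are all assumptions):

    # Compare the expected logo region of every frame against a reference.
    import subprocess
    import numpy as np

    W, H = 640, 360                      # decode at reduced resolution
    x, y, w, h = 500, 20, 100, 60        # expected logo region (placeholder)
    reference = np.fromfile("logo_luma.raw", dtype=np.uint8).reshape(h, w)

    proc = subprocess.Popen(
        ["ffmpeg", "-i", "input.mp4",
         "-vf", f"scale={W}:{H}", "-f", "rawvideo", "-pix_fmt", "gray", "-"],
        stdout=subprocess.PIPE,
    )

    frame_no = 0
    matches = []
    while True:
        buf = proc.stdout.read(W * H)
        if len(buf) < W * H:
            break
        frame = np.frombuffer(buf, dtype=np.uint8).reshape(H, W)
        region = frame[y:y+h, x:x+w].astype(np.int32)
        sad = np.abs(region - reference.astype(np.int32)).sum()
        if sad < 50000:                  # "negligible" SAD -> logo present
            matches.append(frame_no)
        frame_no += 1

    print(matches)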
I have successfully detected a logo using an RPi and a Coral AI accelerator in conjunction with ffmpeg to extract the JPEGs. Crop the image to just the logo, then apply it to your trained model. Even then, you will need to sample a minute or so of video to determine the actual logo's identity.

Forcing custom H.264 intra-frames (keyframes) at encode-time?

I have a video sequence that I'd like to skip to specific frames at playback-time (my player is implemented using AVPlayer in iOS, but that's incidental). Since these frames will fall at unpredictable intervals, I can't use the standard "keyframe every N frames/seconds" functionality present in most video encoders. I do, however, know the target frames in advance.
In order to do this skipping as efficiently as possible, I need to force the target frames to be i-frames at encode time. Ideally in some kind of GUI which would let me scrub to a frame, mark it as a keyframe, and then (re)encode my video.
If such a tool isn't available, I have the feeling this could probably be done by rolling a custom encoder with libavcodec, but I'd rather use a higher-level (and preferably scriptable) tool to do the job if a GUI isn't possible. Is this the kind of task ffmpeg or mencoder can be bent to?
Does anybody have a technique for doing this? Also, it's entirely possible that this is an impossible task because of some fundamental ignorance I have of the h.264 codec. If so, please do put me right.
ffmpeg has a -force_key_frames option that accepts a series of arbitrary timestamps as well as other ways to specify the frames. From the documentation:
-force_key_frames 0:05:00,...
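
A fuller sketch of that option with several arbitrary timestamps (the file names, codec choice, and the timestamps themselves are placeholders):

    # Re-encode and force keyframes at the listed timestamps.
    import subprocess

    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-force_key_frames", "0:00:12.500,0:01:07.000,0:05:00",
        "-c:v", "libx264", "-c:a", "copy",
        "output.mp4",
    ], check=True)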
Answered my own question: it's possible to set custom compression keyframes in Apple Compressor.
Compression markers are also known as manual compression markers. These are markers you can add to a Final Cut Pro sequence (or in the Compressor Preview window) to indicate when Compressor should generate an MPEG I-frame during compression.
Source.
Could you not use chapter markers to jump between sections? Not an ideal solution but a lot easier to achieve.
You can use this software:
http://www.applesolutions.com/bantha/MH.html

How best can I do a global quantitative analysis of a video's frame color

I am trying to analyse an mp4 video and convert its video content into some sort of array of numbers, where the numbers would represent the "global" color of each frame.
The frames are essentially grayscale frames.
Eventually I would like to analyse the frequency of the lightest global color in the image.
Can anyone suggest the best (simplest) solution?
Thanks very much for looking and any suggestions.
Have you tried looking into OpenCV, the computer vision library?
It has support for many video formats such as mp4.
The histogram functionality sounds very close to what you want.
I'm not sure what you mean by lightest color (does that mean lowest saturation in HSV color space?) etc. There's plenty of color space conversion and other functionality that you can explore in OpenCV.
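
A minimal sketch along those lines with OpenCV (the file name and the 0.95 cutoff are assumptions): reduce each frame to one global number (mean luma), then count how often the video sits near its lightest level; cv2.calcHist can replace the mean if a full per-frame histogram is wanted.

    # One "global" brightness number per frame, then a simple frequency count.
    import cv2

    cap = cv2.VideoCapture("input.mp4")
    mean_levels = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mean_levels.append(float(gray.mean()))
    cap.release()

    # how many frames are within 5% of the lightest global level
    lightest = max(mean_levels)
    print(lightest, sum(1 for v in mean_levels if v > 0.95 * lightest))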
