How would I create a radially offset mosaic of RTSP streams that transitions to a logo - ffmpeg

I'm new to Stack Overflow, but I've been researching how to do this for a couple of weeks to no avail. I'm hoping perhaps one of you has some knowledge I haven't found online yet.
Here is a crude illustration of what I hope to accomplish. I have a video wall of eight monitors - four each of two different sizes. The way it's set up now, all eight monitors are treated together as one big monitor displaying an oddly shaped cutout of a desktop.
Eventually I need each individual monitor to display a separate RTSP stream for about thirty seconds, then have the entire display - all eight monitors in conjunction - fade out into a large logo.
My problem right now is that I don't know of a way to mask an RTSP stream so it looks like this rather than this, let alone how to arrange the streams into a weirdly spaced, oddly angled, multiple-aspect-ratio mosaic like in the original illustration.
Thank you all for your time. I'm just an intern here without insane technical knowhow, but I'll try to clarify as much as I can.
-J

I believe -filter_complex is one of the ffmpeg CLI options you need. You can find many examples online, but here are a few links of interest:
Here's an ffmpeg wiki page on creating a mosaic out of several input videos: https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
FFMpeg - Combine multiple filter_complex and overlay functions
That should get you started, but you will probably need to customize things depending on your frame sizes and formats.
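To make that concrete, here is a minimal sketch of the approach - the RTSP URLs, sizes, frame rate, and logo.png are all placeholders you'd replace with your own. It scales two streams, stacks them side by side, and cross-fades the whole wall into a full-screen logo after 30 seconds:

ffmpeg -rtsp_transport tcp -i rtsp://cam1/stream \
       -rtsp_transport tcp -i rtsp://cam2/stream \
       -loop 1 -t 40 -i logo.png \
       -filter_complex "\
         [0:v]scale=960:1080,fps=25,format=yuv420p,settb=AVTB[left]; \
         [1:v]scale=960:1080,fps=25,format=yuv420p,settb=AVTB[right]; \
         [left][right]hstack[wall]; \
         [2:v]scale=1920:1080,fps=25,format=yuv420p,settb=AVTB[logo]; \
         [wall][logo]xfade=transition=fade:duration=2:offset=30[out]" \
       -map "[out]" -t 40 -c:v libx264 output.mp4

xfade needs both inputs to share resolution, frame rate, and timebase, hence the matching scale/fps/settb chains. For the irregular monitor shapes, look at the alphamerge filter: feed each stream plus a grayscale mask image of the target shape through alphamerge, then overlay the result onto the wall canvas at the right position (the rotate filter handles the odd angles).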

Related

How do I apply the stereo effect to a video-mapped sphere?

Assuming all my assets are linked properly, and using oculusRiftEffect.js or StereoEffect.js - how can I make my HTML file Cardboard-compatible? My Gist link is below:
https://gist.github.com/4d9e4c81a6b13874ed52.git
Please advise
Exactly what are you trying to do? "Stereo effect" isn't very descriptive.
However, assuming that you have stereo video files (one for the left eye, one for the right), you'd just play them back on a left sphere (seen by the left eye) and a right sphere for the right eye. The spheres are offset by the interpupillary distance (IPD) - usually about 55mm (actually, it'd be whatever the stereo videos are offset by).
So, you might ask - what happens when I turn around? The IPD goes negative. When I look up or down? It goes to zero. Welcome to stereo video.
Note that there are ways around this, but you're not going to get them with a GoPro without a lot of special processing. At best you can sync the IPD direction with the averaged lens separation of the video streams, but you'll always get stitching errors. The equirectangular format (i.e., the sphere unwrapped onto a rectangular video) isn't the best; you'll always get the wrong answer at the poles. Not using a sphere is probably how this will evolve (but it's the simplest solution till we get a better one).
But, given all that, give it a shot, the brain is very forgiving - and it'll look 3D-ish most of the time. This stuff is being refined all the time.

Detect frames that have a given image/logo with FFmpeg

I'm trying to split a video by detecting the presence of a marker (an image) in the frames. I've gone over the documentation and I see removelogo but not detectlogo.
Does anyone know how this could be achieved? I know what the logo is and the region it will be on.
I'm thinking I can extract all frames to PNGs and then analyse them one by one (or n by n), but it might be a lengthy process...
Any pointers?
ffmpeg doesn't have any such ability natively. The delogo filter simply takes a rectangular region in its parameters and interpolates that region based on its surroundings; it doesn't care what the region previously contained, and will fill it in regardless.
If you need to detect the presence of a logo, that's a totally different task. You'll need to create it yourself; if you're serious about this, I'd recommend that you start familiarizing yourself with the ffmpeg filter API and get ready to get your hands dirty. If the logo has a distinctive color, that might be a good way to detect it.
Since what you're after is probably going to just be outputting information on which frames contain (or don't contain) the logo, one filter to look at as a model will be the blackframe filter (which searches for all-black frames).
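In fact, if the logo's position is fixed and you have a reference image of it, you can approximate detection with stock filters alone: crop the logo region, take the per-pixel difference against the reference, and let blackframe report the frames where the difference is nearly zero (i.e., a match). A rough sketch, with placeholder coordinates and file names:

ffmpeg -i input.mp4 -i logo.png -filter_complex "\
  [0:v]crop=200:80:1700:40,format=gray[roi]; \
  [1:v]scale=200:80,format=gray[ref]; \
  [roi][ref]blend=all_mode=difference,blackframe=amount=95:threshold=32" \
  -an -f null -

Frames where the cropped region matches the logo come out nearly black after the difference blend, so blackframe logs their frame numbers; tune amount and threshold to your material.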
You can write a detect-logo module: decode the video (YUV 4:2:0 format), feed each raw frame to the module, and compute the SAD (sum of absolute differences) between the region where you expect the logo and the known logo image; if the SAD is negligible, it's a match, so record the frame number. You can then split the video at these frames.
The SAD is computed only on the Y (luma) plane. To save processing, you can scale the video down to a lower resolution before running the comparison.
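For illustration, here is a small Python sketch of that idea, piping luma-only raw frames out of ffmpeg and computing the SAD of the logo region against a reference crop - the file names, frame size, region coordinates, and threshold are all assumptions:

import subprocess
import numpy as np

W, H = 1920, 1080                 # assumed frame size of the input video
X, Y, LW, LH = 1700, 40, 200, 80  # assumed logo region (left, top, width, height)

# Reference logo: a raw 8-bit grayscale crop of the logo, LW*LH bytes.
ref = np.fromfile("logo_ref.gray", dtype=np.uint8).reshape(LH, LW).astype(np.int32)

# Ask ffmpeg for luma-only ("gray") raw frames on stdout.
proc = subprocess.Popen(
    ["ffmpeg", "-i", "input.mp4", "-f", "rawvideo",
     "-pix_fmt", "gray", "-loglevel", "error", "-"],
    stdout=subprocess.PIPE)

frame_no, matches = 0, []
while True:
    buf = proc.stdout.read(W * H)          # one luma plane per frame
    if len(buf) < W * H:
        break
    y = np.frombuffer(buf, dtype=np.uint8).reshape(H, W)
    roi = y[Y:Y+LH, X:X+LW].astype(np.int32)
    sad = np.abs(roi - ref).sum()
    if sad < 4 * LW * LH:                  # "negligible": average error < 4 per pixel
        matches.append(frame_no)           # candidate split point
    frame_no += 1

print(matches)

You could then hand those frame numbers to ffmpeg's trim or segment options to do the actual splitting.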
I have successfully detected logos using a Raspberry Pi and a Coral AI accelerator, in conjunction with ffmpeg to extract the JPEGs. Crop the image to just the logo, then apply it to your trained model. Even then you will need to sample a minute or so of video to determine the actual logo's identity.

Still images to video for storage - But back to still images for viewing

Using ffmpeg I can take a number of still images and turn them into a video. I would like to do this to decrease the total size of all my timelapse photos. But I would also like to extract the still images for use at a later date.
In order to use this method:
- I will need to correlate the original still image against a frame number in the video.
- And I will need to extract a thumbnail of a given frame number in a video.
But before I go down this rabbit hole, I want to know if the requirements are possible using ffmpeg, and if so any hints on how to accomplish the task.
Note: the still images are a timelapse from a single camera over a day, so temporal compression should yield measurable savings compared to a stack of JPEGs.
When you use ffmpeg to create a video from a sequence of images, the images aren't affected in any way. You should still be able to use them for what you're trying to do, unless I'm misunderstanding your question.
Edit: You can use ffmpeg to create images from an existing video. I'm not sure how well it will work for your purposes, but the extracted images are pretty high quality, if not identical to the originals. You'd have to play around with it to make sure the extracted images match the input images in sequential order and naming, but if you take fps into account, it should work.
The command to do this (from the ffmpeg documentation) is as follows:
ffmpeg -i movie.mpg movie%d.jpg
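For the two requirements specifically, something along these lines should work (the file names, frame rate, and quality settings are assumptions). Encoding from a numbered input pattern means the Nth image in the sequence becomes frame N-1 (frame numbers are zero-based), which gives you the correlation; the select filter can then pull a thumbnail of any frame by number:

ffmpeg -framerate 24 -i img%05d.jpg -c:v libx264 -crf 18 timelapse.mp4
ffmpeg -i timelapse.mp4 -vf "select=eq(n\,1233),scale=320:-1" -frames:v 1 thumb.png

Bear in mind that with a lossy codec like H.264 the extracted frames won't be bit-identical to the originals; lower -crf values (or a lossless mode) trade file size for fidelity.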

Real-time video(image) stitching

I'm thinking of stitching images from 2 or more(currently maybe 3 or 4) cameras in real-time using OpenCV 2.3.1 on Visual Studio 2008.
However, I'm curious about how it is done.
Recently I've studied some techniques of feature-based image stitching method.
Most of them require at least the following steps:
1. Feature detection
2. Feature matching
3. Finding homography
4. Transformation of target images to reference images
...etc.
Now, most of the techniques I've read about only deal with images "ONCE", while I would like to handle a series of images captured from a few cameras, in "REAL-TIME".
This may still sound confusing, so here are the details:
Put 3 cameras at different angles and positions, while each of them must have overlapping areas with its adjacent one so as to build a REAL-TIME video stitching.
What I would like to do is similar to the content in the following link, where ASIFT is used.
http://www.youtube.com/watch?v=a5OK6bwke3I
I tried to contact the owner of that video, but I got no reply from him. :(
Can I use image-stitching methods to deal with video stitching?
Video itself is composed of a series of images so I wonder if this is possible.
However, detecting feature points seems to be very time-consuming whatever feature detector (SURF, SIFT, ASIFT, etc.) you use. This makes me doubt the possibility of doing real-time video stitching.
I have worked on a real-time video stitching system and it is a difficult problem. I can't disclose the full solution we used due to an NDA, but I implemented something similar to the one described in this paper. The biggest problem is coping with objects at different depths (simple homographies are not sufficient); depth disparities must be determined and the video frames appropriately warped so that common features are aligned. This essentially is a stereo vision problem. The images must first be rectified so that common features appear on the same scan line.
You might also be interested in my project from a few years back. It's a program which lets you experiment with different stitching parameters and watch the results in real-time.
Project page - https://github.com/lukeyeager/StitcHD
Demo video - https://youtu.be/mMcrOpVx9aY?t=3m38s
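One more point worth making, since the cameras in the question are fixed relative to each other: you only need the expensive steps (feature detection, matching, homography estimation) once, after which every frame is just a cheap warp. A rough Python/OpenCV sketch of that idea, using a modern OpenCV API rather than 2.3.1 - the camera indices and canvas size are assumptions, and a single homography still has the depth limitations described above:

import cv2
import numpy as np

capL, capR = cv2.VideoCapture(0), cv2.VideoCapture(1)  # two fixed cameras (assumed indices)

# One-time calibration: features -> matches -> homography.
_, left = capL.read()
_, right = capR.read()
orb = cv2.ORB_create(2000)
kpL, desL = orb.detectAndCompute(left, None)
kpR, desR = orb.detectAndCompute(right, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(desR, desL), key=lambda m: m.distance)[:200]
src = np.float32([kpR[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kpL[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # maps right frame into left's plane

# Per-frame loop: warping only, cheap enough for real-time.
h, w = left.shape[:2]
while True:
    okL, left = capL.read()
    okR, right = capR.read()
    if not (okL and okR):
        break
    canvas = cv2.warpPerspective(right, H, (w * 2, h))  # warp right frame onto a wide canvas
    canvas[0:h, 0:w] = left                             # paste left frame (no seam blending)
    cv2.imshow("stitched", canvas)
    if cv2.waitKey(1) == 27:                            # Esc quits
        break

If the cameras can move relative to each other, you'd re-estimate H periodically (or track features between frames) rather than on every frame.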

Image processing - one frame is washed out, other isn't. How to 'fix'?

I have the following 2 consecutive, unaltered frames from a video:
For some reason the camera made the 2nd much more 'washed out' than the 1st. I want to make the 2nd look more like the 1st.
In the video I'm trying to process, there are lots of cases like this, where the 'exposure' changes suddenly from one frame to the next. I am able to find these parts of the video by looking at the image histogram for each frame: when 2 adjacent frames have histograms that are too far apart, that's where it has happened. So I can find the sections with different exposure, but I'm stumped as to how to fix them.
As a programmer I'm familiar with ImageMagick and that's about it. Since I have lots of frames, some automated hands-off approach is by far the best solution. I am also totally unskilled with graphics editing programmes.
I've tried changing the exposure in imagemagick (with -level 0,50% etc.), but that doesn't help.
What ImageMagick commands (or other FLOSS image editing tools) will make the 2nd image look more like the 1st?
As some people have pointed out in the comments, the problem is the colour balance. Changing the colour balance makes the 2 images more similar.
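If you want to automate the fix rather than eyeball a correction, histogram matching against the good frame does essentially that. A small Python sketch using scikit-image (the file names are placeholders; assumes skimage >= 0.19 for the channel_axis argument):

from imageio.v3 import imread, imwrite
from skimage.exposure import match_histograms

good = imread("frame_0001.png")   # the well-exposed reference frame
bad = imread("frame_0002.png")    # the washed-out frame to fix

# Remap each colour channel of the bad frame so its histogram
# matches the corresponding channel of the reference frame.
fixed = match_histograms(bad, good, channel_axis=-1).astype("uint8")
imwrite("frame_0002_fixed.png", fixed)

Run that over each flagged section, always matching against the last good frame before the exposure jump, and the whole pass stays hands-off.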
