Mixing video streams with a fade transition effect myself - ffmpeg

I would like to build a software setup that lets the user switch between two webcam feeds (or more, but let's consider two), with a fade in/out effect while switching. My main problem is that I don't know where to start.
There seems to be an ffmpeg component that could do the job (libavfilter, perhaps), but I don't understand how to set it up.
What should I use if I want to access the frame images and make the transition myself? Is there a simpler way?
I know that this question looks really poorly written, but I'm as lost as I look.
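For the do-it-yourself route, here is a minimal sketch of grabbing frames from two cameras and alpha-blending them during a switch, using OpenCV in Python; the camera indices, fade length and key bindings are placeholder assumptions:

import cv2

cam_a = cv2.VideoCapture(0)     # first webcam
cam_b = cv2.VideoCapture(1)     # second webcam

active, other = cam_a, cam_b
fade_frames = 30                # length of the cross-fade, in frames
fade_left = 0                   # how many fade frames remain

while True:
    ok_a, frame_a = active.read()
    ok_b, frame_b = other.read()
    if not (ok_a and ok_b):
        break

    if fade_left > 0:
        # Blend the two frames; alpha runs from 0 to 1 over the fade.
        frame_b = cv2.resize(frame_b, (frame_a.shape[1], frame_a.shape[0]))
        alpha = 1.0 - fade_left / float(fade_frames)
        out = cv2.addWeighted(frame_a, 1.0 - alpha, frame_b, alpha, 0)
        fade_left -= 1
        if fade_left == 0:
            active, other = other, active   # the switch is complete
    else:
        out = frame_a

    cv2.imshow("mix", out)
    key = cv2.waitKey(1) & 0xFF
    if key == ord('s') and fade_left == 0:
        fade_left = fade_frames             # 's' starts a fade to the other feed
    elif key == ord('q'):
        break

cam_a.release()
cam_b.release()
cv2.destroyAllWindows()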

Related

JavaFX: Best way to include somewhat complex animations into GUI?

I am working on a desktop application with JavaFX. As seen in the image below, I have an open space for a sort of mascot character, which I hope to give simple animations. They are simple enough that it would be possible, but incredibly hard, to make in JavaFX, yet complex enough that I feel I should use a different method. This is my first JavaFX program, so my experience level is pretty low. I am able and willing to make these animations in other software and create a gif or mp4, etc. if necessary, but I'm just in the dark here. I couldn't find much documentation. Below, you can see the space I would like to fill with an animated mascot. What method would be best for this level of animation?

How can I programmatically interact with a video game GUI

Before I get shot down on this one, I realize that the 'how' answer for this question might be slightly debatable; however, I'm more interested in the 'what'.
In a nutshell, I want to know which methods I can use to interact with a PC video game interface. I want to create a program that can extract data from a video game market interface.
My first thought was that I would need to programmatically take screenshots and then use some Optical Character Recognition software to extract the text, then run whatever operation on the extracted text to derive my insights.
Then I was thinking it might be easier to have a bunch of mini screenshots that I use to find matches on certain sections of the screen. When a match is found, I would then know what the text is on the screen, without having to actually 'extract' it.
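For illustration only, both ideas can be prototyped in a few lines of Python; the libraries (pyautogui, pytesseract, OpenCV), the screen region and the template file below are assumptions, not a recommendation:

import cv2
import numpy as np
import pyautogui
import pytesseract

# Idea 1: grab a region of the game window and OCR the text out of it.
def read_market_text(region=(100, 200, 300, 40)):    # (left, top, width, height), made up
    shot = pyautogui.screenshot(region=region)
    gray = cv2.cvtColor(np.array(shot), cv2.COLOR_RGB2GRAY)
    return pytesseract.image_to_string(gray)

# Idea 2: skip OCR and look for a known mini screenshot on the screen instead.
def find_template(template_path="buy_button.png", threshold=0.9):
    screen = cv2.cvtColor(np.array(pyautogui.screenshot()), cv2.COLOR_RGB2GRAY)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None  # top-left corner of the match, or None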
For those out there who have done this, can you point me in one direction or the other? Perhaps there is a method that I am completely unaware of.
If this question is not suitable for this forum, it would be much appreciated if you could direct me elsewhere.
Edit: I should probably add that I'm not looking to spend a fortune on this project, so free software would be best. Perhaps that's a tall order.
I'm starting to think Sikuli is the direction I'm going to go: open-source image recognition software that integrates with Python, Ruby, Java, JDBC, JavaScript and more.
-- Expanding on the question --
There are basically 3 categories of tools:
Recorder: while you manually work through your workflow, a recorder tracks your mouse and keyboard actions. After stopping the recording, you can play it back (autorun your workflow). The recordings can usually be edited and augmented with additional features.
GUI aware: the tool lets you operate programmatically on GUI elements like buttons. This is based on knowledge of the internal structures and names of the GUI elements and their features. Some of these tools also have a recording feature.
Visual: the tool "sees" images (usually rectangular pixel areas) on the screen and lets you act on these images using mouse and keyboard simulation. Such a tool might have some recorder feature as well.
SikuliX belongs to the 3rd category and currently does not have a recorder feature.
Answer in progress...
In games with moddable UIs, like many MMOs, you could create a mod that streams data through a series of black and white squares that could be read with optical sensors. From there, a microcontroller could deliver the data back to the PC via USB or wifi.
My approach, as a noob: first determine whether OCR is 100% needed; I think this plays a role in speed.
If possible:
- run the game in a window (allows for easy troubleshooting)
- check whether the game has a high-contrast option; it will help Sikuli find things
Then you plan out your scenarios:
You have to create different functions for different situations. A lot of gaming is "do you see this?" then "do this" until that is gone.
Start with small parts you want to automate, then build on them, making sure your parts can scale in case small changes need to happen (they will). For instance, say you want to open the menu if you see an object, let's say a tree.
Assume you have some sort of walking algorithm.
setROI(region1)        # focus the search on the region where the tree should appear
if exists(tree_img):   # tree_img is the captured image pattern of the tree
    click(location)    # or hit the shortcut key that opens the menu
    click(item_img)    # if the item moves in the menu you may need to scroll to it first,
                       # or narrow the ROI so Sikuli can tell your item from ones you don't want to click
You would get that to loop into other actions and proceed, along the lines of the sketch below. Good luck.
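A hypothetical example of such a loop in Sikuli script; tree_img, menu_img and item_img are placeholder image captures, and the timings are made up:

tree_img = "tree.png"          # placeholder captures you would make in the Sikuli IDE
menu_img = "menu.png"
item_img = "item.png"

def harvest_trees():
    while exists(tree_img):        # "do you see this?"
        click(tree_img)            # "...then do this"
        if exists(menu_img, 5):    # wait up to 5 seconds for the menu to appear
            click(item_img)
        wait(1)                    # give the game a moment to react before looping

harvest_trees()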

Custom Slider for video on iPad

I have a custom UISlider and use currentPlaybackTime to change values of an MPMoviePlayerController object.
The problem is that when I scrub at a fast rate using the slider, it doesn't respond as fast as I would like.
Is there any better way to have a fast, interactive scrubber for iPad? I'm targeting OS 3.2 and up.
Well, there are two issues, and only one of them is under your direct control.
Multimedia content is commonly compressed using some kind of delta compression, so quick and exact seeking is not a trivial task. Since that is common to most content and you cannot directly change it, you will have to live with it.
The only way to increase seeking responsiveness on the content side (when encoding) is to reduce the GOP size, that is, to use fewer P-frames between the I-frames.
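As a concrete (hypothetical) example of encoding with a smaller GOP, assuming ffmpeg is available on the encoding side; the file names and the value 15 are arbitrary:

import subprocess

# Force a keyframe at least every 15 frames so seeks can land closer to the
# requested position; input/output names and the GOP value are placeholders.
subprocess.run(["ffmpeg", "-i", "input.mp4", "-g", "15", "-c:a", "copy", "output.mp4"], check=True)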
When using a slider or a similar control, you could, instead of directly connecting the current playback position to it, handle manual changes in an indirect fashion. Run a timer-based job that, whenever the slider/scrubber has been moved, tries to adjust the playback position towards the new value. While the player is seeking, prevent the scrubber from getting feedback from the current playback location, and allow it again once the player is back in the playing state. That way the user does not directly experience the clunky seek feedback.
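The question is about MPMoviePlayerController, but the indirect handling itself is platform-neutral. Below is a rough sketch of the pattern in Python with a stand-in Player class; the real player API and the 100 ms polling interval are assumptions:

import threading

class Player:
    """Stand-in for the real movie player object."""
    def __init__(self):
        self.position = 0.0
        self.seeking = False

    def seek(self, seconds):
        self.seeking = True
        self.position = seconds      # a real player would seek asynchronously here
        self.seeking = False

class Scrubber:
    def __init__(self, player, interval=0.1):
        self.player = player
        self.interval = interval     # how often we try to apply the latest slider value
        self.target = None           # most recent position requested by the slider
        self._schedule()

    def slider_moved(self, seconds):
        # Called by the UI on every slider change; we only remember the value.
        self.target = seconds

    def _schedule(self):
        timer = threading.Timer(self.interval, self._tick)
        timer.daemon = True
        timer.start()

    def _tick(self):
        # Apply only the most recent target, instead of seeking on every slider event.
        if self.target is not None and not self.player.seeking:
            self.player.seek(self.target)
            self.target = None
        self._schedule()

    def position_for_display(self):
        # While a seek is pending, show the slider's own value so the thumb does
        # not snap back to the old playback position mid-seek.
        return self.target if self.target is not None else self.player.position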

What language/libraries for an app that has a video preview window?

I want to make a simple assistant for putting together AviSynth scripts. This would be a Windows desktop application with a "preview" screen for an AVI movie, giving you a timeline, play, fast-forward, rewind, and frame-by-frame stepping forward and back. The program would need to know the frame number of the current frame in the player and its filename.
What language is best suited for this? I know PHP (I understand that this is not a contender) and am familiar with Java. My thought is that the biggest hurdle in this project will be finding a library for the video playing features. At a cursory glance, no Java video libraries jumped out at me. My next thought would be C++ for this.
The output of this program would be an AviSynth script, a plaintext file which looks like this:
AviSource("myAvi.avi")
Crop(0, 0, 320, 240)
Blur(0.1)
There are a few toolkits that can do this:
C#: DirectShow (DirectX)
Java: JMF
If you have AviSynth installed, the only thing you need for preview (if I understood correctly, that's your need) is something that can decode uncompressed video; the script would open like a normal file. I'm sure there are video players implemented fairly well in Java, but I don't know how much functionality you need from them. In any case, parsing scripts is not easy; I recommend you not try to unless you really need to.
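If you do end up coding the preview yourself, the frame-number and frame-stepping part is small in any language. As an illustration only, here is a sketch with OpenCV in Python; the filename is a placeholder and this does not open or parse AviSynth scripts:

import cv2

cap = cv2.VideoCapture("myAvi.avi")      # placeholder filename
ok, frame = cap.read()

while ok:
    cv2.imshow("preview", frame)
    # Frame number of the displayed frame (read() has already advanced the position by 1).
    current = int(cap.get(cv2.CAP_PROP_POS_FRAMES)) - 1

    key = cv2.waitKey(0) & 0xFF          # wait for a key; no free-running playback here
    if key == ord('.'):                  # step one frame forward
        ok, frame = cap.read()
    elif key == ord(','):                # step one frame back
        cap.set(cv2.CAP_PROP_POS_FRAMES, max(current - 1, 0))
        ok, frame = cap.read()
    elif key == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()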
EDIT: I'm sorry, I thought you needed a very specific app, but from what you seem to need, you don't have to code anything: use AVSP!
Please watch this video; it shows how straightforward it is. It has advanced functions such as auto-completion (even from your own auto-loading scripts!), syntax coloring, macros, automatic importing, drag & drop (of a video, for instance: just drag it in and AVSP handles the loading), script preview with zoom and so on. You can use automatic or custom sliders (you can make a slider that rewrites a number in the script in real time, for instance for hue/luminosity/contrast/etc. that would be cumbersome to control via script), checkboxes & radio buttons (for boolean values, etc.), text fields that alter strings in real time, and basically anything you need. Please check it out.
Also, VirtualDubMod is OLD.
And yep, AVSP is free, both gratis and libre! =)

Qt Animation: Appearing & Disappearing Objects

I'm writing a video annotation application with Qt4 in which users need to be able to seek to various points in a video, put markers on various objects, and then set keypoints for those markers so that they stay on the objects in the video as they move around. QGraphicsItemAnimation seems like a great place to start for these markers; however, they need to be able to appear and disappear at specific times, which I can't figure out how to do with QGraphicsItemAnimation. I could set the scale to 0 to make the objects disappear, but that seems like a pretty hacky solution, and I'm guessing the paint engine would still waste CPU cycles trying to draw those invisible objects. Does anyone have a better solution than this? I'm using Qt 4.5.3 right now, but I'm willing to upgrade to 4.6 if it makes things easier. Thanks!!
It seems like the functionality you want, showing and hiding QGraphicsItem objects, is beyond the scope of the simple "tweening" that the animation class performs. It handles only one object at a time, and you have to write any appearance or disappearance yourself.
You still might get some mileage out of QGraphicsItemAnimation (although the fact that it uses its own timer instead of being locked to the frame clock of your video is a little dodgy).
Neglecting "seeking" for a moment, there is a QTimeLine::finished() signal. If you let the end of an annotation's active animation timeline represent the point where you want it to disappear, you can trigger QGraphicsItem::hide() at that point. When it comes time to turn it back on, you would construct a new QGraphicsItemAnimation (based on the next run of keyframe data for that object) and call QGraphicsItem::show().
Note that one of the headlining features of Qt 4.6 is the new Animation Framework, which is more sophisticated but also rather complex. I've not used it yet, but looking over the examples it seems like you might be able to "animate" a visibility or opacity property.
