Is there an open-source video codec which can "play" .exe files? - windows

Well, I would like to use Windows Media Player to run .exe applications in its video window. The application would be, for example, a full-screen DirectX or OpenGL application that you can execute on the OS.
I would like to know if there is such a codec that I could tweak for my needs. Or maybe there is one which is very tweakable but not (fully) open-source?
(I am asking this question because of this question: https://superuser.com/questions/533730/how-to-run-an-directx-or-opengl-application-as-desktop-background)

This is probably the weirdest request I've read in a long time. First the bad news: No, there's no open-source codec to play the output generated by a ".exe" in the video window of Windows Media Player. ".exe"s, or more accurately PE files (Portable Executables), contain program code, i.e. data that is interpreted as a program by your CPU. Videos, however, are not programs but image data.
A video codec is a program that translates video data between formats. For example, it can decode compressed H.264 into raw RGB data suitable for display. Video codecs operate under certain constraints, for example that they decode a sequence of frames.
Now the good news: Technically it is possible to write such a codec. It won't be possible to open a .exe with WMP directly, though, as .exe files can't be interpreted by WMP. But you could introduce a new FOURCC, a 4-character code identifying a particular video encoding format, and register a special-purpose codec under that FOURCC. Then you create a special AVI file using that FOURCC, containing a reference to your target .exe instead of video data in the frames. When WMP tries to play this file it will load this "codec", which in turn can launch the .exe. You need to establish a communication protocol between the launched application and the "codec". An off-screen rendering surface must be created, and I'd say a PBuffer DC shared between the processes serves this best, i.e. the "codec" creates the PBuffer and the .exe creates an OpenGL context on top of it. Then the codec passes the contents of the PBuffer as the decoded video frames to WMP.
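To make the file-authoring half of this concrete, here is a minimal sketch using the Video for Windows AVIFile API. The FOURCC 'XAPP', the file name, and the payload layout are all invented for illustration; the matching "codec" registered under that FOURCC (not shown) would have to parse the .exe path back out of the frame data and handle the launching and PBuffer plumbing described above:

```cpp
// Sketch: author an AVI whose single "video" stream advertises a custom
// FOURCC and whose one frame carries an .exe path instead of pixels.
// Compile with: cl hackavi.cpp /link vfw32.lib
#include <windows.h>
#include <vfw.h>

int main()
{
    AVIFileInit();

    PAVIFILE file = nullptr;
    if (AVIFileOpen(&file, TEXT("launcher.avi"),
                    OF_WRITE | OF_CREATE, nullptr) != AVIERR_OK)
        return 1;

    // Describe a "video" stream handled by our hypothetical codec.
    AVISTREAMINFO si = {};
    si.fccType    = streamtypeVIDEO;
    si.fccHandler = mmioFOURCC('X', 'A', 'P', 'P'); // invented FOURCC
    si.dwScale    = 1;
    si.dwRate     = 25;   // nominal 25 fps
    si.dwLength   = 1;    // a single fake frame
    SetRect(&si.rcFrame, 0, 0, 640, 480);

    PAVISTREAM stream = nullptr;
    if (AVIFileCreateStream(file, &stream, &si) == AVIERR_OK)
    {
        // The stream format repeats the FOURCC as its "compression".
        BITMAPINFOHEADER bih = {};
        bih.biSize        = sizeof(bih);
        bih.biWidth       = 640;
        bih.biHeight      = 480;
        bih.biPlanes      = 1;
        bih.biBitCount    = 32;
        bih.biCompression = mmioFOURCC('X', 'A', 'P', 'P');
        AVIStreamSetFormat(stream, 0, &bih, sizeof(bih));

        // Instead of pixel data, the frame payload is the target path.
        const char payload[] = "C:\\path\\to\\target.exe";
        AVIStreamWrite(stream, 0, 1, (LPVOID)payload, sizeof(payload),
                       AVIIF_KEYFRAME, nullptr, nullptr);
        AVIStreamRelease(stream);
    }

    AVIFileRelease(file);
    AVIFileExit();
    return 0;
}
```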
So yes, such a hack can be done. But it'd be ugly and weird.
Why not simply write a visualization plugin for WMP? Those run in the video window as well, and they don't require such an ugly hack.

Simple answer: NO.
Complex answer: your title makes zero sense, because further down you do not talk of playing an .exe file but of trying to intercept "all sorts of APIs" and magically transform them into a video.

Related

How to add a Poster Frame to an MP4 video by timecode?

The mvhd atom or box of the original QuickTime MOV format supports a poster time variable for a timecode to use as a poster frame, which can be used in preview scenarios as a thumbnail image or cover picture. As far as I can tell, the ISOBMFF-based MP4 format (.m4v) has inherited this feature, but I cannot find a way to set it using FFmpeg or MP4Box or similar cross-platform CLI software. Edit: Actually, neither ISOBMFF nor MP4 imports this feature from MOV. Is there any other way to achieve this, e.g. using something like HEIF's derived images with a thmb role (see Amendment 2)?
The original Apple QuickTime (Pro) editor did have a menu option for doing just that. (Apple Compressor and Photos could do it, too.)
To be clear, I do not want to attach a separate image file, which could possibly be a screenshot grabbed from a movie still, as a separate track to the multimedia container. I know how to do that:
Stackoverflow #54717175
Superuser #597945
I also know that some people used to copy the designated poster frame from its original position to the very first frame, but many automatically generated previews use a later time index, e.g. from 10 seconds, 30 seconds, 10% or 50% into the video stream.

Streaming video playlist from collection of identical mp4 files

I am looking for a way to play/stream a list of mp4 files (same size, bitrate, etc.) to a browser video tag without hiccups between the files. I am hoping the following approach would work:
* convert mp4 files to m4s/m4v files
* generate an MPEG-DASH MPD file (XML)
* stream MPD to dash player in browser
Is this in any way possible? I am aware the m4s/m4v files need special headers and an entry file must be made somehow, and that is where I hit my roadblock.
The bottom line is that I want to avoid concatenating the separate videos into one big video file, and avoid the hiccups you see when sequencing via a straightforward 'ended' event handler in JS.
Any suggestion much appreciated!
If you want a basic client side solution you can use two separate players or video tags in your web page, showing one and hiding the other.
The one that is visible plays the current video.
The other player loads, starts, and immediately pauses the next video.
When the first video ends, you hide that player and make the other one visible, un-pausing the video at the same time.
You then preload the next video into the original player and continue.
This technique is used successfully on some sites where ad breaks are mixed with the main video, for example.

convert swf to video in Flash Pro CS5.5

I have searched the web and here for answers but so far, the links are dead, the how-tos no longer work for the version I have, or there are no answers.
I have a swf animation with full sound and scripting that I'd like to convert into a video or an flv. For some reason, the site I post on screws with my timeline somehow (the timing is off, sounds no longer match up properly with the text) so I thought a video would work better.
I tried using File > Export > Export Movie to resolve this, exporting to an AVI. When it's scaled down to 300x400 it works just fine (though it looks like total crap). However, when I export at the full size, using full colors and no compression, I get this.
I'm not sure what to do with it. It's slanted, with lines through it, and grayscale. VLC is the only thing that will play it, too; WMP dies with errors, saying it's an invalid or corrupt format. The funny thing is, the thumbnail for the video is exactly what it should look like.
Converting it to an .flv is just fine; I have a video converter for that. I just can't get it to convert to an flv or even a movie type properly.
Why is it doing this to my video? Is there something better to use to convert? Is there a good one that won't plaster a giant watermark over it?
Flabaco is an online SWF to video converter. To answer your questions: it's free, and it doesn't impose banners or watermarks.
I have a swf animation with full sound and scripting that I'd like to convert into a video or an flv.
Flabaco converts scripted content. It preserves the frame rate (fps) & color. It's capable of generating professional quality HD content.
It doesn't convert sound. Nonetheless, the converted quality is good, and you might be able to get by using another video tool to add sound to the converted video.
You can use the online converter app here: www.Flash-Banner-Converter.com
PS: There are some older posts on StackOverflow related to your question. Just search SWF to video / Flabaco.
Kayo,
FLABACO (FLAshBAnnerCOnverter)

Extracting an image from H.264 sample data (Objective-C / Mac OS X)

Given a sample buffer of H.264, is there a way to extract the frame it represents as an image?
I'm using QTKit to capture video from a camera and using a QTCaptureMovieFileOutput as the output object.
I want something similar to the CVImageBufferRef that is passed as a parameter to the QTCaptureVideoPreviewOutput delegate method. For some reason, the file output doesn't contain the CVImageBufferRef.
What I do get is a QTSampleBuffer which, since I've set it in the compression options, contains an H.264 sample.
I have seen that on the iPhone, CoreMedia and AVFoundation can be used to create a CVImageBufferRef from the given CMSampleBufferRef (Which, I imagine is as close to the QTSampleBuffer as I'll be able to get) - but this is the Mac, not the iPhone.
Neither CoreMedia nor AVFoundation is available on the Mac, and I can't see any way to accomplish the same task.
What I need is an image (whether it be a CVImageBufferRef, CIImage or NSImage doesn't matter) from the current frame of the H.264 sample that is given to me by the Output object's call back.
Extended info (from the comments below)
I have posted a related question that focuses on the original issue - attempting to simply play a stream of video samples using QTKit: Playing a stream of video data using QTKit on Mac OS X
It appears not to be possible, which is why I've moved on to trying to obtain frames as images and create the appearance of video by scaling, compressing, and converting the image data from CVImageBufferRef to NSImage and sending it to a peer over the network.
I can use the QTCaptureVideoPreviewOutput (or QTCaptureDecompressedVideoOutput) to get uncompressed frame images in the form of CVImageBufferRef.
However, these image references need compressing, scaling, and converting into NSImages before they're any use to me, hence the attempt to get an already scaled and compressed frame from the framework using QTCaptureMovieFileOutput (which allows a compression format and image size to be set before starting the capture), saving me from having to do the expensive compression, scale, and conversion operations, which kill the CPU.
Does the Creating a Single-Frame Grabbing Application section of the QTKit Application Programming Guide not work for you in this instance?

Drawing video with text on top

I am working on an application and I have a problem I just can't seem to find a solution for. The application is written in VC++. What I need to do is display a YUV video feed with text on top of it.
Right now it works correctly by drawing the text in the OnPaint method using GDI, and the video on a DirectDraw overlay. I need to get rid of the overlay because it causes too many problems: it won't work on some video cards, on Vista, 7, etc.
I can't figure out a way to accomplish the same thing in a more compatible way. I can draw the video using DirectDraw with a back buffer and copy it to the primary buffer just fine. The issue here is that the text being drawn in GDI flickers because of the rate at which the video is refreshed. I would really like to keep the code that draws the text intact if possible, since it works well.
Is there a way to draw the text directly to a DirectDraw buffer or memory buffer or something, and then blt it to the back buffer? Should I be looking at another method altogether? The two important OSes are XP and 7. If anyone has any ideas, just let me know and I will test them out. Thanks.
Try to look into DirectShow and the Ticker sample on microsoft.com:
DirectShow Ticker sample
This sample uses the Video Mixing Renderer to blend video and text. It uses the IVMRMixerBitmap9 interface to blend text onto the bottom portion of the video window.
DirectShow is for building filter graphs for playing back audio or video streams, and for adding different filters for various effects and manipulation of video and audio samples.
Instead of using the Video Mixing Renderer of DirectShow, you can also use the ISampleGrabber interface. The advantage is that it is a filter which can be used with other renderers as well, for example when not showing the video on the screen but streaming it over the network or dumping it to a file.
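For reference, here is a minimal sketch of the VMR-9 route the Ticker sample takes, assuming a DirectShow graph has already been built with the VMR-9 as the renderer; the pVmr pointer, the text, and the rectangles below are placeholders:

```cpp
// Sketch: blend static GDI text over video via the VMR-9 mixer bitmap.
// Assumes pVmr is the VMR-9 filter already added to a running graph.
#include <dshow.h>
#include <vmr9.h>

void OverlayText(IBaseFilter *pVmr)
{
    IVMRMixerBitmap9 *pBmp = nullptr;
    if (FAILED(pVmr->QueryInterface(IID_IVMRMixerBitmap9, (void**)&pBmp)))
        return;

    // Draw the text into an offscreen GDI memory DC.
    HDC hdcScreen = GetDC(nullptr);
    HDC hdcMem    = CreateCompatibleDC(hdcScreen);
    HBITMAP hbm   = CreateCompatibleBitmap(hdcScreen, 256, 32);
    HGDIOBJ hOld  = SelectObject(hdcMem, hbm);
    SetBkColor(hdcMem, RGB(0, 0, 0));        // black will be the color key
    SetTextColor(hdcMem, RGB(255, 255, 0));
    TextOut(hdcMem, 0, 0, TEXT("Hello, video"), 12);

    // Hand the DC to the mixer; color-keyed pixels become transparent.
    // The VMR copies the bitmap during SetAlphaBitmap, so the DC can be
    // released afterwards.
    VMR9AlphaBitmap ab = {};
    ab.dwFlags   = VMR9AlphaBitmap_hDC | VMR9AlphaBitmap_SrcColorKey;
    ab.hdc       = hdcMem;
    SetRect(&ab.rSrc, 0, 0, 256, 32);
    ab.rDest.left   = 0.0f; ab.rDest.top    = 0.8f; // bottom strip of the
    ab.rDest.right  = 1.0f; ab.rDest.bottom = 1.0f; // window (normalized)
    ab.fAlpha    = 0.7f;              // 70% opaque
    ab.clrSrcKey = RGB(0, 0, 0);
    pBmp->SetAlphaBitmap(&ab);

    SelectObject(hdcMem, hOld);
    DeleteObject(hbm);
    DeleteDC(hdcMem);
    ReleaseDC(nullptr, hdcScreen);
    pBmp->Release();
}
```

Because the text lives in the renderer rather than in an OnPaint handler, it is composited with every frame and does not flicker with the video refresh.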
