Trouble with vlcj to play video

As part of a project for college, I have to be able to play video in my Java app.
I've written the following code:
EmbeddedMediaPlayerComponent component = new EmbeddedMediaPlayerComponent();
JFrame f = new JFrame();
f.setContentPane(component);
f.setBounds(new Rectangle(200, 200, 800, 600));
f.addWindowListener(new WindowAdapter() {
    @Override
    public void windowClosing(WindowEvent e) {
        component.release();
        System.exit(0);
    }
});
f.setVisible(true);
component.mediaPlayer().media().play("video");
Everything compiles successfully, and when I run the project the window for the video opens and I can hear the sound of the video, but no image is shown. Can anyone help me fix this?

The EmbeddedMediaPlayerComponent needs a heavyweight AWT Canvas to embed the video.
When running on macOS there is no heavyweight AWT, so a normal embedded media player component will not work.
Instead, you need some form of "direct rendering", where you paint the video yourself into a lightweight component. vlcj provides an implementation of this with the CallbackMediaPlayerComponent.
To get the basics working, you can simply replace your code:
EmbeddedMediaPlayerComponent component = new EmbeddedMediaPlayerComponent();
With:
CallbackMediaPlayerComponent component = new CallbackMediaPlayerComponent();
This will give you reasonable default behaviour; you can customise the CallbackMediaPlayerComponent if you need to.
The performance of this approach will not be as good as the embedded component, but for most use cases it will be plenty good enough.
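To see it in context, here is a complete minimal sketch, assuming the vlcj 4.x API used in the question ("video" is a stand-in for your actual media path, exactly as in the original code):

import java.awt.Rectangle;
import java.awt.event.WindowAdapter;
import java.awt.event.WindowEvent;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import uk.co.caprica.vlcj.player.component.CallbackMediaPlayerComponent;

public class CallbackPlayerDemo {

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            // Paints each video frame into a lightweight Swing component,
            // so it also works on macOS where there is no heavyweight AWT
            CallbackMediaPlayerComponent component = new CallbackMediaPlayerComponent();

            JFrame f = new JFrame();
            f.setContentPane(component);
            f.setBounds(new Rectangle(200, 200, 800, 600));
            f.addWindowListener(new WindowAdapter() {
                @Override
                public void windowClosing(WindowEvent e) {
                    component.release(); // free the native resources before exiting
                    System.exit(0);
                }
            });
            f.setVisible(true);

            component.mediaPlayer().media().play("video"); // your media path here
        });
    }
}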

Related

VLCJ EmbeddedMediaPlayerComponent player still shows old video image after preparing a new video

The following is the way I load a video (in the actual code, the variables are member variables of the player class). I do not want the video to be played right away, which is why I use prepareMedia(). When the application is ready to play the video, I call player.play().
However, my player view (I add the EmbeddedMediaPlayerComponent to a JPanel, which is set as the content pane of a JFrame) still shows the old video after running the following code with a new "videoPath" value. The player view shows the new video only after I call player.play().
EmbeddedMediaPlayerComponent mediaPlayerComponent = new EmbeddedMediaPlayerComponent();
MediaPlayer player = mediaPlayerComponent.getMediaPlayer();
player.prepareMedia(videoPath);
Is there any way I can get the player to show the new video image (or at least remove the old video image) without starting to play it? I tried calling methods such as repaint() on mediaPlayerComponent and stop() on player, in overridden MediaPlayerEventAdapter methods such as mediaFreed(), but nothing I have tried so far works.
It is a feature of VLC/LibVLC that the final frame of the video is displayed when the video ends, so you have to find a workaround.
A good solution is to use a CardLayout with two views, one for the media player component (or the Canvas used for the video surface) and another view simply with a blank (black) JPanel.
The idea then is to listen for the video starting/stopping/finishing and show the appropriate view in your card layout.
If you add a MediaPlayerEventListener and implement the playing, stopped, finished and error events you should cover all cases.
For example: in the "playing" event you switch your card layout to show the video view; in the "stopped", "finished" and "error" events you switch your card layout to show the blank view.
The view does not have to be black, of course; you can do whatever you want, like show an image.
Also note that the media player events will NOT be delivered on the Swing Event Dispatch Thread, so you will need to use SwingUtilities#invokeLater to switch your view properly.
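Putting those pieces together, here is a minimal sketch, assuming the vlcj 3.x API used in the question (the card names "VIDEO" and "BLANK", and any variable names beyond those in the question, are arbitrary):

import java.awt.CardLayout;
import java.awt.Color;
import javax.swing.JPanel;
import javax.swing.SwingUtilities;
import uk.co.caprica.vlcj.player.MediaPlayer;
import uk.co.caprica.vlcj.player.MediaPlayerEventAdapter;

// Two stacked views: the media player component and a blank panel
final CardLayout cards = new CardLayout();
final JPanel views = new JPanel(cards);

JPanel blankView = new JPanel();
blankView.setBackground(Color.BLACK);

views.add(mediaPlayerComponent, "VIDEO");
views.add(blankView, "BLANK");
// use "views" as the frame's content pane instead of the component itself

player.addMediaPlayerEventListener(new MediaPlayerEventAdapter() {
    @Override
    public void playing(MediaPlayer mediaPlayer) {
        // Events arrive on a native thread, so switch views on the EDT
        SwingUtilities.invokeLater(() -> cards.show(views, "VIDEO"));
    }

    @Override
    public void stopped(MediaPlayer mediaPlayer) {
        SwingUtilities.invokeLater(() -> cards.show(views, "BLANK"));
    }

    @Override
    public void finished(MediaPlayer mediaPlayer) {
        SwingUtilities.invokeLater(() -> cards.show(views, "BLANK"));
    }

    @Override
    public void error(MediaPlayer mediaPlayer) {
        SwingUtilities.invokeLater(() -> cards.show(views, "BLANK"));
    }
});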

Alternatives to using a MovieClip or BitmapData for an image?

I've been trying for two days to find an alternative way of loading an image into my current project. I am using Adobe Flash Professional CS6 as my IDE and animation program. I would like to be able to display an image in my application. What I am trying to do is have the image display on the screen; the user enters the PLU associated with the image, and if the PLU is right they receive a point. I have everything else ready to go, but I just can't find an efficient way to deal with loading the image.
Right now I'm using this to accomplish getting my image on the display:
var myDisp:Layer0 = new Layer0();
var bmp:Bitmap = new Bitmap(myDisp);
spDispBox.addChild(bmp);
The above code works just fine, but the limitation I can't get around is that I'm going to have to import each image into the library and then write code for each one individually. I want to stick to OOP and streamline this process; I just don't know where to turn to accomplish my project goal. I'm more than happy to give more information. Thanks in advance, everyone.
July 26, 2014 - Update: I agree, now, that XML is the way to go. I'm just having a hard time getting the grasp of loading an external XML file. I'm following along, but still not quite getting the idea. I understand about creating a new XML data object, a Loader, and a URLRequest. It's just loading the picture. I've been able to get output by using trace in the function to see that the XML is loaded, but when I go to add the XML data object to the stage I'm getting a null object reference.
I'm going to try a few more things, I just wanted to update the situation. Thanks again everyone.
It seems like these images are in your FLA library. To simplify your code you can make a singleton class, which you can name ImageFactory (factory design pattern), and call that when you need an image; it will return a Sprite (lighter than a MovieClip):
spDispBox.addChild( ImageFactory.getImageA() ); // returns a Sprite with your image
and in your ImageFactory
public function getImageA():DisplayObject {
    var image:Layer0 = new Layer0(); // image from the FLA library
    var holder:Sprite = new Sprite();
    holder.addChild( new Bitmap( image ) );
    return holder;
}
I also recommend using a more descriptive name than Layer0.

AS3 annoying issue with png as a cursor - stopDrag and startDrag

I've never run into this problem in the past, but when I use a PNG as my default cursor in an application I'm developing, it stops working with startDrag and stopDrag.
Before, when I imported a simple vector graphic (a scribble), it worked fine.
Does the alpha channel throw the code out?
If so, is there a workaround that still allows me to use the PNG in this way?
It's a PNG of some tweezers with a drop shadow.
// ------------------------ tweeze tool
Mouse.hide();
stage.addEventListener(MouseEvent.MOUSE_MOVE, follow);
function follow(evt:MouseEvent):void {
    tweezed_one.x = mouseX;
    tweezed_one.y = mouseY;
}
Your question is not entirely clear, but what I understand is that you are not aware of mouseChildren when using a custom cursor: the cursor sprite follows the mouse, so it sits on top of everything and its children intercept the mouse events meant for the objects underneath.
Add this line where you created it:
tweezed_one.mouseChildren = false;
Now you will be able to drag and drop.
For more info, see the mouseChildren documentation.

how to grab a OpenCV frame without showing it or without creating cvNamedWindow?

I do not understand why OpenCV does not work when I do not create an OpenCV window using cvNamedWindow.
Actually, I do not want to use an OpenCV GUI window; I want to use a third-party GUI to display the grabbed frame, so I do not need to create an OpenCV window. But when I do not create an OpenCV window, my application gets stuck and nothing works, and when I do create one using cvNamedWindow, everything works fine.
Any suggestions as to the reason? How can I grab an OpenCV frame without creating its GUI window?
I am using OpenCV 2.4.3 (cvQueryFrame), VS2010 C++, Windows XP.
Thanks.
You probably need to skip the waitKey() call, too ;)
(Also, do yourself a favour and skip the C API; it's a real PITA and will go away soon.)
That's because you are grabbing images at a faster rate than the camera can output. You need to add a small delay to your while loop. If your camera does 25 FPS, you should add ~1/25 of a second or so.
Solved: The problem was that I was creating the third-party GUI window inside the pthread, which caused it to update endlessly. When I created the window outside the pthread, it worked fine.
The procedure is this:
void Init()
{
    createGUIwin(w, h);
    init_pthread();
}

void init_pthread(void*)
{
    //createGUIwin(w, h); // before, I was creating GUIwin here
    while (ON)
    {
        frame = getOCVframe();
        UpdateGUIwin(frame);
        key = cvWaitKey(10);
    }
}
Thanks everyone. I appreciate your answers.

Should I use NSOperation or NSRunLoop?

I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:
change some camera parameters on the fly (gain, gamma, etc.)
tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)
Using the button features, I have been unable to loop the video frame monitor while still watching for a button press (much like using the keypressed feature from C). Two options present themselves:
Initiate a new run loop (for which I cannot get an autorelease pool to function ...)
Initiate an NSOperation - how do I do this in a way which allows me to connect with an Xcode button push?
The documentation is very obtuse about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it via an object from Interface Builder. When I create an NSRunLoop, I get an object-leak error, and I can find no example of how to create an autorelease pool that actually responds to the run loop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...
Because Objective-C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ...
Thanks in advance.
I've needed to do almost exactly the same as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon QuickTime functions, but I found libdc1394 to be a little easier to understand.
For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.
The CVDisplayLink is configured using the following code:
CGDirectDisplayID displayID = CGMainDisplayID();
CVReturn error = kCVReturnSuccess;
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error:%d", error);
    displayLink = NULL;
}
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);
and it calls the following function to trigger the retrieval of a new camera frame:
static CVReturn renderCallback(CVDisplayLinkRef displayLink,
                               const CVTimeStamp *inNow,
                               const CVTimeStamp *inOutputTime,
                               CVOptionFlags flagsIn,
                               CVOptionFlags *flagsOut,
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}
The CVDisplayLink is started and stopped using the following:
- (void)startRequestingFrames
{
    CVDisplayLinkStart(displayLink);
}

- (void)stopRequestingFrames
{
    CVDisplayLinkStop(displayLink);
}
Rather than using a lock on the FireWire camera communications, whenever I need to adjust the exposure, gain, etc., I change the corresponding instance variables and set the appropriate bits within a flag variable to indicate which settings to change. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
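That handoff pattern is language-agnostic. Purely as an illustration (this is not the original Objective-C code, and every name here is hypothetical), it amounts to something like this Java sketch:

import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for whatever API actually talks to the camera
interface Camera {
    void setGain(int value);
    void setExposure(int value);
}

class CameraSettings {
    // One bit per adjustable setting
    private static final int GAIN_CHANGED     = 1 << 0;
    private static final int EXPOSURE_CHANGED = 1 << 1;

    private final AtomicInteger dirtyFlags = new AtomicInteger();
    private volatile int gain;
    private volatile int exposure;

    // Called from the UI thread: store the new value and mark it dirty
    void setGain(int value) {
        gain = value;
        dirtyFlags.getAndUpdate(f -> f | GAIN_CHANGED);
    }

    void setExposure(int value) {
        exposure = value;
        dirtyFlags.getAndUpdate(f -> f | EXPOSURE_CHANGED);
    }

    // Called from the frame-grab callback: push pending changes to the
    // camera, then clear the flags; no lock is needed
    void applyPendingChanges(Camera camera) {
        int flags = dirtyFlags.getAndSet(0);
        if ((flags & GAIN_CHANGED) != 0) {
            camera.setGain(gain);
        }
        if ((flags & EXPOSURE_CHANGED) != 0) {
            camera.setExposure(exposure);
        }
    }
}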
Display to the screen is handled through an NSOpenGLView (CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some extensions you can use to provide these frames as textures using DMA for better performance.
Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.
If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.
Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can affect camera properties as you wanted, but if not, Plan B should work.
Plan B: Use a scheduled, recurring NSTimer to ask QTKit to grab a frame every so often ("how" linked above) and apply your image manipulations to the frame (perhaps with Core Image) before displaying it in your NSImageView.
