I'm new to Windows Phone 8 and need your help capturing screen activity as a video. I have to make a video of the activities being performed on the screen.
One solution that occurred to me is to capture the screen as images using a dispatcher timer at fixed intervals, but that doesn't seem like the right approach, since I need an actual video of the screen activity. Please suggest how to handle this problem.
There's no built-in way of doing what you want.
You will need two things:
Use a dispatcher timer to capture frames, as you describe.
Find code that will encode these frames into a movie. That's not an API that the phone supports - you will need to find existing code and use it. I am not aware of such code existing, but I have only looked for it once or twice and not very hard. You could, potentially, create an MJPEG, which is a fairly simple video format, but even that's not trivial, and the resulting file size can be prohibitive.
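Just to illustrate what that encoding step amounts to (this is a desktop sketch with Python and OpenCV, not anything the phone ships with; the file names and frame rate are made up), stitching timer-captured frames into an MJPEG AVI looks roughly like this:

import cv2
import glob

frames = sorted(glob.glob("capture_*.png"))  # frames grabbed earlier by the timer
height, width = cv2.imread(frames[0]).shape[:2]

# MJPEG is essentially one JPEG per frame inside an AVI container,
# which is also why the resulting files get big.
writer = cv2.VideoWriter("screen.avi", cv2.VideoWriter_fourcc(*"MJPG"), 10, (width, height))
for path in frames:
    writer.write(cv2.imread(path))
writer.release()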
Before I get shot down on this one, I realize that the 'how' answer for this question might be slightly debatable; however, I'm more interested in the 'what'.
In a nutshell, I want to know which methods I can use to interact with a PC video game interface. I want to create a program that can extract data from a video game market interface.
My initial thought was that I would need to programmatically take screenshots and then use some Optical Character Recognition software to extract the text, then run whatever operation on the extracted text to derive my insights.
Then I was thinking it might just be easier to have a bunch of mini screenshots that I use to find matches on certain sections of the screen. When a match is found, I would then know what the text on the screen is, without having to actually 'extract' it.
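For illustration, I imagine the 'match a mini screenshot against the screen' idea would look something like this (a sketch using OpenCV in Python; the file names and the 0.9 threshold are just placeholders):

import cv2

def region_matches(screenshot_path, template_path, threshold=0.9):
    # True if the small template image appears somewhere in the screenshot.
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(screen, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)
    return max_val >= threshold

print(region_matches("market_screen.png", "buy_button.png"))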
For those out there who have done this, can you point me in one direction or the other? Perhaps there is a method that I am completely unaware of.
If this question is not suitable for this forum, it would be much appreciated if you could direct me elsewhere.
Edit: I should probably add that I'm not looking to spend a fortune on this project... so any free software would be best. Perhaps that's a tall order.
I'm starting to think Sikuli is the direction I'm going to go. It's open-source image recognition software that integrates with Python, Ruby, Java, JDBC, JavaScript and more.
-- Expanding on the question --
There are basically 3 categories of tools:
Recorder: while you manually work through your workflow, a recorder tracks your mouse and keyboard actions. After stopping the recording, you can play it back (autorun your workflow). The recordings can usually be edited and augmented with additional features.
GUI-aware: the tool lets you programmatically operate on GUI elements such as buttons. This is based on knowledge of the internal structures and names of the GUI elements and their features. Some of these tools also have a recording feature.
Visual: the tool "sees" images (usually rectangular pixel areas) on the screen and lets you act on those images using mouse and keyboard simulation. Such a tool might have a recorder feature as well.
SikuliX belongs to the 3rd category and currently does not have a recorder feature.
Answer in progress...
In games with moddable UIs, like many MMOs, you could create a mod that streams data through a series of black and white squares that could be read with optical sensors. From there, a microcontroller could deliver the data back to the PC via USB or wifi.
My approach, as a noob: first determine whether OCR is really needed; I think this plays a role in speed.
If possible:
- run the game in a window (allows for easier troubleshooting)
- check whether the game has a high-contrast option; it will help Sikuli find things
Then plan out your scenarios:
You have to create different functions for different situations. A lot of game automation is "do you see this?", then "do this" until it is gone.
Start with small parts you want to automate, then build on them, making sure they can scale when small changes need to happen (they will). For instance, say you want to open the menu when you see an object, let's say a tree.
Assume you have some sort of walking algorithm.
setROI(region1)      # focus the search here for the tree
if exists(tRee):
    click(loCation)  # or hit the shortcut key that opens the menu
    click(iTem)      # if the item moves around in the menu you may need to scroll to find it
                     # first, or change the ROI and see whether Sikuli can differentiate your
                     # item from one you don't want to click
You would get that to loop into other actions and proceed. Good luck.
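To capture the "do this until that is gone" pattern mentioned above, a small loop in the same Sikuli style might look like this (tRee and loCation are the same placeholder images as in the snippet, and the 10-second timeout is arbitrary):

while exists(tRee):       # "do you see this?"
    click(loCation)       # "do this" - whatever action removes the tree
    waitVanish(tRee, 10)  # give it up to 10 seconds to disappear before checking again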
I have a custom UISlider and use it together with currentPlaybackTime to scrub an MPMoviePlayerController object.
The problem is that when I scrub at a fast rate using the slider, it doesn't respond as fast as I would like.
Is there a better way to get a fast, interactive scrubber for iPad? I'm targeting OS 3.2 and later.
Well, there are two issues, and only one of them you can control directly.
Multimedia content is commonly compressed using some kind of delta compression, so quick and exact seeking is not a trivial task. Since that is common and you cannot directly change it, you will have to live with it.
The only way to increase seek responsiveness on the content side (when encoding) is to reduce the GOP size - that is, fewer P-frames between the I-frames.
When using a slider or a similar control, you could, instead of directly connecting the current playback position to it, handle manual changes in an indirect fashion. You could run a timer-based job that, whenever the slider/scrubber has been moved, tries to adjust the playback position towards that new value. While the player is seeking, prevent the scrubber from getting feedback from the current playback location, but allow it again once the player is back in the playing state. That way the user does not directly experience the clunky seek feedback.
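As a rough, platform-agnostic sketch of that indirect handling (Python here just to show the logic; the player object with seek_to(), position and is_seeking, and the ui_update callback, are hypothetical stand-ins for whatever your player API offers):

import threading

class ScrubController:
    def __init__(self, player, ui_update, poll_interval=0.1):
        self.player = player        # hypothetical: has seek_to(t), position, is_seeking
        self.ui_update = ui_update  # called with the position the scrubber should display
        self.poll_interval = poll_interval
        self.target = None          # latest position requested by the user, if any
        self._tick()

    def on_scrubber_moved(self, new_position):
        # Called from the UI: remember the request instead of seeking immediately.
        self.target = new_position

    def _tick(self):
        # Timer-based job: issue at most one seek per tick, towards the latest target.
        if self.target is not None and not self.player.is_seeking:
            self.player.seek_to(self.target)
            self.target = None
        elif self.target is None and not self.player.is_seeking:
            # Feed the position back to the scrubber only while no seek is pending,
            # so the control doesn't jump around while the player catches up.
            self.ui_update(self.player.position)
        threading.Timer(self.poll_interval, self._tick).start()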
I'm developing a WP7 app. I have a collection of BitmapImages that I load from the isolated storage.
Now I want to make a movie or an animated GIF from those BitmapImages. Is this possible, and if so, how?
An animated GIF is probably not possible because Silverlight does not work with GIFs.
Yes, Windows Phone 7 doesn't support animated GIFs. Here's a link to something similar that might be able to help you, though:
Display GIFs in a Windows Phone 7 application
If you have a collection of images in isolated storage, I'm guessing that you don't want to turn them into an animated GIF on the phone anyway.
Just use a Timer to cycle through the images.
If you want greater control over the "playback", you could create an actual movie file, but whether this is practical will depend on the images and how you get them into IS in the first place.
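This isn't phone code, but as a sketch of the timer-cycling idea (Python; frame_paths and the show() callback are assumptions - on WP7 the equivalent would be something like a DispatcherTimer swapping the Source of an Image control):

import threading

def cycle_images(frame_paths, show, fps=10):
    # Flip-book "playback": show each frame in turn on a timer, looping forever.
    interval = 1.0 / fps
    index = 0

    def tick():
        nonlocal index
        show(frame_paths[index])  # hand the current frame to whatever displays it
        index = (index + 1) % len(frame_paths)
        threading.Timer(interval, tick).start()

    tick()

# e.g. cycle_images(["frame0.png", "frame1.png"], show=print, fps=5)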
Using a Storyboard would be a good way to create an animation, but the problem is how to export the animation. I have been thinking about a solution for a long time.
I want to make a simple assistant for putting together AviSynth scripts. This would be a Windows desktop application that would have a "preview" screen of an AVI movie, with a timeline and controls to play, fast-forward, rewind, and step forward or back frame by frame. The program would need to know the frame number of the current frame in the player and its filename.
What language is best suited for this? I know PHP (I understand that this is not a contender) and am familiar with Java. My thought is that the biggest hurdle with this project will be finding a library for the video playing features. On a cursory glance, no Java video libraries jumped out at me. My next thought would be C++ for this.
The output of this program would be an AviSynth script, a plain-text file which looks like this:
AviSource("myAvi.avi")
Crop(0, 0, 320, 240)
Blur(0.1)
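(Generating that output is the easy part; a minimal sketch in Python - the function name and parameters are just illustrative - could be:)

def build_avisynth_script(avi_path, crop=(0, 0, 320, 240), blur=0.1):
    # Assemble the plain-text AviSynth script from the chosen source and filters.
    lines = [
        'AviSource("%s")' % avi_path,
        "Crop(%d, %d, %d, %d)" % crop,
        "Blur(%s)" % blur,
    ]
    return "\n".join(lines)

with open("output.avs", "w") as f:
    f.write(build_avisynth_script("myAvi.avi"))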
There are a few toolkits that can do this:
C#: DirectShow (DirectX)
Java: JMF
If you have AviSynth installed, the only thing you need for preview (if I understood correctly, that's your need) is something that can decode uncompressed video; the script would open like a normal file. I'm sure there are video players implemented fairly well in Java, but I don't know how much functionality you need from them. In any case, parsing the scripts yourself is not easy - I recommend you not try to unless you really need to.
EDIT: I'm sorry, I thought you needed a very specific app, but from what you seem to need, you don't need to code anything - use AVSP!
Please watch this video; it shows how straightforward it is. It has advanced functions such as auto-completion (even from your own auto-loading scripts!), syntax coloring, macros, automatic importing, drag & drop (of a video, for instance - just drag it in and AVSP handles the loading), script preview with zoom, and so on. You can use automatic or custom sliders (for example, a slider that rewrites a number in the script in real time, for hue/luminosity/contrast/etc. that would be cumbersome to control via script), checkboxes and radio buttons (for boolean values, etc.), text fields that alter strings in real time, and basically anything you need... Please check it out.
Also, VirtualDubMod is OLD.
And yep, AVSP is free, both gratis and libre! =)
I was wondering how software like GoToMeeting captures the desktop. I can do a full-screen (or block-by-block) capture using GDI, but that just seems too wasteful to me. I have also looked into mirror devices, but I was wondering if there's a simpler technique or a library out there that does this.
I need fast and efficient desktop screen capture (10-15 fps) which I am eventually going to convert into a video file and integrate with my application, to send the captured feed over the network or something similar.
Thanks!
Yes, taking a screen capture and finding the diff from the previous capture would be a good way to reduce transmission bandwidth, by sending only the changes across. Of course, this is similar to video encoding techniques, which do this block by block.
It still means you need to do a capture plus extra processing to get the difference, i.e. to encode it.
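A rough sketch of that block-by-block diff idea in Python (assuming Pillow and NumPy are available; the block size and threshold are arbitrary):

import numpy as np
from PIL import ImageGrab  # Pillow's screen grab (Windows/macOS)

BLOCK = 16  # block edge in pixels

def changed_blocks(prev, curr, threshold=0):
    # Compare two frames block by block; return the (x, y) origins of blocks that differ.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    changed = []
    h, w = diff.shape[:2]
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            if diff[y:y + BLOCK, x:x + BLOCK].max() > threshold:
                changed.append((x, y))
    return changed

prev = np.asarray(ImageGrab.grab())
curr = np.asarray(ImageGrab.grab())
# Only the changed blocks (plus their coordinates) would need to go over the network.
print(len(changed_blocks(prev, curr)), "blocks changed")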
By using mirror devices you can get both the updated rectangles (the areas that have changed) and a pointer to the screen. The updated-rectangles pointer points to all the rectangles that have changed; these are the regions that change frequently. You will want to filter out some of the rectangles, because you can get thousands of them in one second.
I would either:
- do full-screen captures, and then perform image processing to isolate the parts of the screen that have changed, to save bandwidth
-OR-
- use a program like CamStudio.
I get 20 to 30 frames per second using the memory (mirror) driver and display them in my picture box, but when I get a full-screen update the frames get buffered, as the picture box is slow. I have switched to my own component, which is somewhat faster, but still not good at full screen; on average I display 10 fps at full screen. I am facing a problem rendering frames: I can capture 20 to 30 frames per second, but my rendering is only 8 to 10 full-screen frames per second. If anyone has achieved faster full-screen frame rendering, please reply.
What language?
.NET provides Graphics.CopyFromScreen.
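Graphics.CopyFromScreen lives in System.Drawing; as a rough illustration of what the capture loop itself looks like (Python with Pillow here, purely to show the shape of it; the frame rate, duration and output names are arbitrary):

import time
from PIL import ImageGrab  # Pillow's screen grab (Windows/macOS)

def capture_loop(seconds=2, fps=10):
    # Grab the screen at a fixed rate and save numbered frames for later encoding.
    interval = 1.0 / fps
    for i in range(int(seconds * fps)):
        start = time.time()
        ImageGrab.grab().save("capture_%04d.png" % i)
        # Sleep off whatever time is left in this frame's slot.
        time.sleep(max(0.0, interval - (time.time() - start)))

capture_loop()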