Image and video fullscreen? - windows-phone-7

I have an application that shows videos and images, like a presentation. The images have different resolutions, and so do the videos. I want to display the images and videos in full-screen mode without losing quality. The screen rotation works fine, but the media content does not appear the way it should.
Basically, I want to show the images centered vertically and horizontally without losing quality.
Suggestions?

For the video I would recommend using a MediaElement with the Stretch attribute set to 'Uniform'. 'Uniform' takes up all the space you give the control while ensuring the video maintains its aspect ratio. You should still have all the quality possible, because the stretching happens on the GPU and it does a great job. You can see an example here:
http://msdn.microsoft.com/en-us/library/system.windows.controls.mediaelement.aspx
Now, if you want the video to only scale up to its original size and not get blown up any larger, just set the Stretch property to 'None'.
The Image control works the same way and also has the same Stretch property. See the Image class documentation and sample here:
http://msdn.microsoft.com/en-us/library/system.windows.controls.image(v=VS.95).aspx

Did you try the MediaElement API and the NaturalVideoHeight and NaturalVideoWidth properties?

Related

Why does the bodymovin result animation look more stretched than the source frames?

I'm making an animation of a product that listens to you and reacts accordingly, and I want to upload my animation to my Webflow project.
My animation resolution is 1080x720. I export the keyframes as PNG images (as the Webflow tutorial recommends), then import those images into a new After Effects project and export the animation (I would like to say that I follow each step of the tutorial exactly as it is). The problem comes when I test the resulting JSON in the LottieFiles previewer: the animation looks stretched (I can't explain it, so I'll upload two images to show the problem).
The original frame is a PNG image used in the bodymovin sequence.
The JSON output frame is a base64 image (the first frame of the animation) stored in the bodymovin animation result data.json.
The two images above are the same resolution but look different; I want to know why and how to fix it.
Thanks in advance.
Link to the original Webflow tutorial that I follow.
Sorry, this was just a configuration problem; I figured out how to fix it. I just had to turn on bodymovin settings > Assets > "Copy Original Assets". By default bodymovin apparently removes the white/transparent padding and expands the content to fill the frame; enabling "Copy Original Assets" forces it to keep the original frames instead.

How can I overlay an image onto a video

How can I overlay an image onto a video without changing the video file?
I have many videos and I want to be able to open them, overlay a ruler onto them, and then visually measure the distance an individual moved. All I want is to play a video, open an image with some transparency, and position the image over the video. This way I would be able to look at the video and see how far the individual moved.
I would like to do this without having to embed the image like a watermark, because that is computationally expensive: I would need to copy the video, embed the ruler into it, watch the video, and then delete that video file. This seems unnecessary. I would like to just watch the video and have a transparent image over it while I am watching.
Is there a program that does this all together?
Alternatively, is there a program which I can use to open an image and make it transparent and then move it over the video that is playing?
Note: I am using Windows.
It sounds from your requirements that simply overlaying a separate image layer over the video will meet your needs.
Implementing this approach will depend on the video player client you are using, but you could implement an HTML5 based solution and play the videos locally with this (or even from a URL on the web if you have them there).
There is a nice answer with a working fiddle which shows how to do this with HTML5 here: https://stackoverflow.com/a/31175193/334402
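For reference, a minimal sketch of that overlay approach in plain browser script (the file names and the 0.5 opacity are placeholders, not from the question):

// Sketch: play a <video> normally and absolutely position a
// semi-transparent ruler <img> on top of it.
function addRulerOverlay(videoSrc: string, rulerSrc: string): HTMLElement {
  const container = document.createElement("div");
  container.style.position = "relative";
  container.style.display = "inline-block";

  const video = document.createElement("video");
  video.src = videoSrc;               // e.g. "subject.mp4" (placeholder)
  video.controls = true;

  const ruler = document.createElement("img");
  ruler.src = rulerSrc;               // e.g. "ruler.png" (placeholder)
  ruler.style.position = "absolute";
  ruler.style.left = "0";
  ruler.style.top = "0";
  ruler.style.opacity = "0.5";        // keep the video visible underneath
  ruler.style.pointerEvents = "none"; // clicks fall through to the video controls

  container.appendChild(video);
  container.appendChild(ruler);
  document.body.appendChild(container);
  return container;
}

addRulerOverlay("subject.mp4", "ruler.png");

Because the ruler is just a separate DOM element, the video file itself is never modified and nothing has to be re-encoded.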
One thing to note - you have not mentioned scale in your question. If you need to measure how far the person has moved in real distance, rather than just in centimeters across the video screen, then you will need to somehow work out the scale of the video. This makes things considerably harder, as the video may zoom in and out during the sequence you want to measure, so you would need some reference to calculate the scale for each frame. One approach would be to use the individual as a reference, assuming they appear in all the frames you are interested in.
What about using good old VLC for that?
Open VLC, go to Tools→Effects and Filters→Video Effects→Overlay, and select the 'Add logo' checkbox.
Then add your transparent overlay image and play any video with VLC; the image is drawn on top of the playing video.

Disable color correction in Firefox programmatically per image?

This question is closely related to Firefox 3.5 color correction hack?
The situation I have is that there's a canvas game of mine, and the images used in it carry additional information about their shape, connection points, etc. This information is stored in the PNG image itself, using meaningful colors (e.g. RGB(255,255,0) for a connection point).
To load and paint on the canvas I create an Image object, set img.src, and in the img.onload handler I preprocess the image data, reading out the embedded information (and removing those marker pixels from the image data before painting to the canvas).
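Roughly, the flow looks like this (simplified sketch; the names and file name are illustrative):

// Simplified sketch of the loading/preprocessing flow (names illustrative).
const img = new Image();
img.onload = () => {
  // Draw the sprite onto an off-screen canvas so its pixels can be read.
  const canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0);

  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const px = imageData.data; // RGBA bytes

  for (let i = 0; i < px.length; i += 4) {
    // A connection point is expected to be exactly RGB(255,255,0)...
    if (px[i] === 255 && px[i + 1] === 255 && px[i + 2] === 0) {
      // ...record the connection point, then hide the marker pixel
      px[i + 3] = 0; // make it transparent before painting to the game canvas
    }
  }

  ctx.putImageData(imageData, 0, 0);
  // The canvas (not the original img) is then used as the sprite source.
};
img.src = "sprite.png"; // illustrative file name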
The problem: in Firefox, a pixel that was supposed to be 255,255,0 is actually 255,254,0. I don't have a problem with Firefox's color correction as such (I don't care whether the displayed image has the right colors or slightly modified ones), but I'd expect getting the image data to give me uncorrected values. I'm looking for a solution that does not involve changing the images on the server. Is there some way? E.g.:
img.setColorProfile(), or
img.disableColorCorrection(), or
img.getImageData(disableColorCorrection) or img.getImageData(colorProfile)?
The problem might have more to do with image loading than with image drawing.
I think the proper solution is to strip the color profile information from the images (which you seem to want to avoid). If that is not possible and you need to keep the original files intact, serve separate image resources to Firefox.
http://f6design.com/journal/2006/12/01/fixing-png-gamma/
Also, you could decode the PNG images in pure JavaScript if the server co-operates and allows CORS and AJAX loading of the images. You decode the image in JavaScript using png.js and create a source <canvas> from the image data (instead of an <img>). This way you are in control of which RGB values come out of each PNG pixel.
https://github.com/devongovett/png.js
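A rough sketch of that idea is below; the decodePng helper is a hypothetical stand-in for whatever API the decoder library actually exposes (it is not the real png.js API), so check the library's documentation before relying on it:

// Sketch only: decodePng is a hypothetical stand-in for a pure-JS PNG
// decoder such as png.js; the real library's API will differ.
declare function decodePng(bytes: Uint8Array): {
  width: number;
  height: number;
  pixels: Uint8ClampedArray; // raw RGBA, untouched by browser color management
};

async function loadUncorrected(url: string): Promise<HTMLCanvasElement> {
  // The server must allow CORS for this fetch to succeed.
  const resp = await fetch(url);
  const bytes = new Uint8Array(await resp.arrayBuffer());

  // Decoding happens in JavaScript, so no color correction is applied.
  const { width, height, pixels } = decodePng(bytes);

  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  const ctx = canvas.getContext("2d")!;
  ctx.putImageData(new ImageData(pixels, width, height), 0, 0);
  return canvas; // use this canvas as the sprite source instead of <img>
}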

Where does DirectShow get image dimensions from?

We are using a DirectShow interface to capture images from a video stream. These images are presented in a fixed size window.
Once we have captured an image we store it as a bitmap. Downstream we have the ability to add annotation to the image, for example letters in a fixed size font.
In one of our desktop environments, the annotation has started appearing at half the size that it normally appears at. This implies that the image we are merging the text onto has dimensions that are maybe twice as large.
The system that this happens on is a shared resource as in some unknown individual has installed software on the system that differs from our baseline.
We have two approaches - the 1st is to reimage the system to get our default text size behaviour back. The 2nd is to figure out how directshow manages image dimensions so that we can set the scaling on the image correctly.
A survey of the DirectShow literature indicates that the above is not a trivial task. The original work was done by another team that did not document what they did. Can anybody point us in the direction of which DirectShow object we need to deal with to properly size the sampled image?
DirectShow - as a framework - does not deal with resolutions directly. Your video source (such as capture hardware) provides the video feed in a certain resolution, which you can possibly change. You normally use IAMStreamConfig, as described in Configure the Video Output Format, to choose the capture resolution.
Sometimes you cannot affect the capture resolution and you need to resample the image from whatever dimensions you captured it in. There is no stock filter for this; however, Media Foundation provides a suitable Video Resizer DSP which does most of the work. Unfortunately it does not fit into a DirectShow pipeline smoothly, so you need an adapter and/or a custom filter for the resizing.
When filters connect in DirectShow, they have an AM_MEDIA_TYPE. Here you will find a VIDEOINFOHEADER with a BITMAPINFOHEADER and this header has a biWidth and biHeight.
Try to build the FilterGraph manually (with GraphEdit or GraphStudioNext) and inspect these fields.

How do I set the first and last frame of a video to be an image?

HTML5 implementations differ across browsers. In Firefox, the image specified by the poster attribute is shown until the user clicks play on the video. In Chrome, the poster image is shown until the video is loaded (not played), at which point the first frame of the video is shown.
To reconcile this, I would like to set the first frame of the video to the poster image so that the experience is the same in both browsers.
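For reference, the video setup itself is nothing special; roughly like this (file names are placeholders):

// Markup equivalent: <video src="clip.mp4" poster="poster.jpg" controls>
const video = document.createElement("video");
video.src = "clip.mp4";      // placeholder video file
video.poster = "poster.jpg"; // the image browsers show before playback
video.controls = true;
document.body.appendChild(video);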
I would preferably do this using ffmpeg or mencoder. I have very limited experience using these however, so if someone could point me in the right direction, I would be much obliged.
Thanks!
