How to get image frames from video in Flutter on Windows?

In my Flutter desktop application I am playing video using the dart_vlc package. I want to take image frames (like screenshots) from a video file at different playback positions. At first I tried the screenshot package, where I found two problems:
It takes a screenshot of the NativeVideo widget (a widget for showing video) only, but does not capture the video frame rendered inside it.
The video control buttons (play, pause, volume) appear in the screenshot, which I don't want.
There are other packages for exporting image frames from video, such as export_video_frame or video_thumbnail, but neither of them is supported on Windows. I want to do the same thing they do, but on Windows.
So how can I get image frames from a video file playing in a video player on Windows?

The dart_vlc package itself provides a function to take snapshots from the video file currently playing in the video player. The general structure is
videoPlayer.takeSnapshot(file, width, height)
Example:
videoPlayer.takeSnapshot(File('C:/snapshots/snapshot1.png'), 600, 400)
This call captures the video player's current position as an image and saves it to the specified file.
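To capture frames at several positions, seek can be combined with takeSnapshot. Here is a minimal sketch, assuming a Player that has already opened the media file; the half-second delay after each seek is my assumption (to give the player time to render the new frame), not a documented requirement:

import 'dart:io';
import 'package:dart_vlc/dart_vlc.dart';

// Save a snapshot at each of several playback positions.
Future<void> exportFrames(Player videoPlayer) async {
  const positions = [
    Duration(seconds: 10),
    Duration(seconds: 30),
    Duration(seconds: 60),
  ];
  for (var i = 0; i < positions.length; i++) {
    videoPlayer.seek(positions[i]);
    // Assumed settle time so the seeked frame is rendered before capture.
    await Future.delayed(const Duration(milliseconds: 500));
    videoPlayer.takeSnapshot(File('C:/snapshots/snapshot_$i.png'), 600, 400);
  }
}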

Related

Video was encoded with a new width + height along with the old one. Can I re-encode with just the old dimensions using ffmpeg?

I've got a video out of OBS that plays normally on my system if I open it with VLC, for example, but when I import it into my editor (Adobe Premiere) it gets weirdly cropped down. Inspecting the video's metadata shows that, for some reason, the video was encoded with a new width and height on top of the old one! Is there a way, using ffmpeg, to re-encode/transcode the video to a new file with only the original width and height?
Bonus question: would there be a way for me to extract the audio channels from my video as separate .mp3s? There are 4 audio channels on the video.
Every time you reencode a video you will lose quality. Scaling the video up will not reintroduce details that were lost when it was scaled down.
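If you re-encode anyway, here is a sketch of a starting point (the 1920x1080 target and all file names are assumptions; substitute the real original dimensions, e.g. as reported by ffprobe):

# Scale back to the original dimensions; -map 0 keeps all streams,
# and the audio is stream-copied untouched.
ffmpeg -i input.mkv -map 0 -vf scale=1920:1080 -c:v libx264 -c:a copy output.mp4

For the bonus question, each audio stream can be mapped into its own MP3 file, assuming the four "channels" are separate audio streams (as OBS multi-track recordings are) and your ffmpeg build includes an MP3 encoder:

ffmpeg -i input.mkv -map 0:a:0 track1.mp3 -map 0:a:1 track2.mp3 \
       -map 0:a:2 track3.mp3 -map 0:a:3 track4.mp3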

How to add colorProfile with AVAssetWriter to video recorded from screen using CGDisplayStream

I've written a screen-recording app that writes out H.264 movie files using VideoToolbox and AVAssetWriter. The colors in the recorded files are a bit dull compared to the original screen. I know that this is because the colorProfile is not stored in the video file.
This is closely related to How to color manage AVAssetWriter output
I have created a testbed to show this on github ScreenRecordTest
If you run this app you can start recording using CMD-R and stop it with the same CMD-R (you have to start and stop recording once to get a fully written movie file). You will find the recording in your /tmp/ folder under a name like "/tmp/grab-2018-10-25 09:23:32 +0000.mov".
While recording, the app shows two live images: a) the frame obtained from CGDisplayStream, and b) the CMSampleBuffer that came out of the compressor.
What I found out is that the IOSurface returned from CGDisplayStream is not color-managed, so you'll notice the "dull" colors even before compression. If you un-comment line 89 in AppDelegate.swift
// cgImage = cgImage.copy(colorSpace: screenColorSpace)!
this live preview will have the correct colors. But that only fixes the display of the IOSurface before compression. I have no idea how to make the other live preview (after compression; line 69 in AppDelegate) show correct colors (that is, how to apply a color profile to a CMSampleBuffer), or, most importantly, how to tag the written video file with the right profile so that when opening the .mov file I get the correct colors on playback.
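One avenue worth exploring, sketched here as an assumption rather than as code from the ScreenRecordTest repo: if the frames go through a VTCompressionSession, the session can be tagged with explicit color properties (Rec. 709 below, assuming an sRGB-like display), so the encoded stream's format description, and with it the written .mov, carries the profile:

import VideoToolbox

// Hypothetical helper: tag a compression session as Rec. 709 so the
// encoded stream (and the .mov written from it) carries the profile.
func tagAsRec709(_ session: VTCompressionSession) {
    let properties: [(CFString, CFString)] = [
        (kVTCompressionPropertyKey_ColorPrimaries,
         kCVImageBufferColorPrimaries_ITU_R_709_2),
        (kVTCompressionPropertyKey_TransferFunction,
         kCVImageBufferTransferFunction_ITU_R_709_2),
        (kVTCompressionPropertyKey_YCbCrMatrix,
         kCVImageBufferYCbCrMatrix_ITU_R_709_2),
    ]
    for (key, value) in properties {
        let status = VTSessionSetProperty(session, key: key, value: value)
        assert(status == noErr, "failed to set \(key)")
    }
}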

ffmpeg: add single frame to end of video as images are acquired

I wish to add a frame to the end of a video just after it has been captured, so I can build a timelapse video as the images are acquired.
So the idea is to take an image and use ffmpeg to grow the video by appending each image just after it is acquired.
I've seen many questions about overlaying a logo-type image for a set length of time, or about compiling a whole batch of single images into a video, but not this.
Anyone got a good idea of what to try?
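One approach to sketch, with assumed file names: wrap each newly captured image in a one-frame segment, then append it to the growing video with ffmpeg's concat demuxer, which stream-copies the existing footage instead of re-encoding it:

# Turn the newest image into a one-frame H.264 segment.
ffmpeg -loop 1 -framerate 24 -i frame_0042.png -frames:v 1 \
       -c:v libx264 -pix_fmt yuv420p segment.mp4

# list.txt names the current video plus the new segment:
#   file 'timelapse.mp4'
#   file 'segment.mp4'
ffmpeg -f concat -safe 0 -i list.txt -c copy timelapse_new.mp4

Stream copy with the concat demuxer only works if every segment shares the same codec, resolution, and pixel format, so encode each one-frame segment with identical settings.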

How can I overlay an image onto a video

How can I overlay an image onto a video without changing the video file?
I have many videos and I want to be able to open them, overlay a ruler onto them, and then visually measure the distance an individual moved. All I want is to play a video, open up an image with some transparency, and position the image over the video. This way I would be able to look at the video and see how far the individual moved.
I would like to do this without having to embed the image like a watermark, because that is computationally expensive: I would need to copy the video, embed the ruler into it, watch that copy, and then delete the file. This seems unnecessary. I would like to just watch the video with a transparent image over it.
Is there a program that does this all together?
Alternatively, is there a program which I can use to open an image and make it transparent and then move it over the video that is playing?
Note: I am using Windows.
It sounds from your requirements that simply overlaying a separate image layer over the video will meet your needs.
Implementing this approach will depend on the video player client you are using, but you could implement an HTML5 based solution and play the videos locally with this (or even from a URL on the web if you have them there).
There is a nice answer with a working fiddle which shows how to do this with HTML5 here: https://stackoverflow.com/a/31175193/334402
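The essence of that approach is just an absolutely positioned, semi-transparent image on top of a video element. A minimal sketch, with assumed file names:

<!-- pointer-events: none lets clicks pass through to the video controls. -->
<div style="position: relative; width: 640px;">
  <video src="clip.mp4" width="640" controls></video>
  <img src="ruler.png"
       style="position: absolute; top: 0; left: 0; opacity: 0.5;
              pointer-events: none;">
</div>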
One thing to note: you have not mentioned scale in your question. If you need to measure how far the person has moved in real-world distance, rather than just in centimetres across the video screen, then you will need to somehow work out the scale of the video. This makes things considerably harder, as the video may zoom in and out during the sequence you want to measure, so you would need some reference to calculate the scale for each frame. One approach would be to use the individual as a reference, assuming they appear in all the frames you are interested in.
What about using good old VLC for that?
Open VLC, go to Tools → Effects and Filters → Video Effects → Overlay, and select the Add logo checkbox.
Then add your transparent overlay image and play any video with VLC; the overlay image is drawn on top of the playing video.

How do I set the first and last frame of a video to be an image?

HTML5 implementations differ across browsers. In Firefox, the image specified by the poster attribute is shown until the user clicks play on the video. In Chrome, the poster image is shown until the video is loaded (not played), at which point the first frame of the video is shown.
To reconcile this, I would like to set the first frame of the video to the poster image, so that the experience is the same in both browsers.
I would preferably do this using ffmpeg or mencoder. I have very limited experience using these, however, so if someone could point me in the right direction, I would be much obliged.
Thanks!
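A hedged ffmpeg sketch of one way to do it, assuming poster.png already matches the video's resolution and pixel format, and that all file names are placeholders: prepend the image as a single frame using the concat filter (which re-encodes the video, unlike the concat demuxer):

# Prepend poster.png as one frame (1/25 s at 25 fps) before the video.
# Audio handling is omitted for brevity.
ffmpeg -loop 1 -framerate 25 -t 0.04 -i poster.png -i video.mp4 \
       -filter_complex "[0:v][1:v]concat=n=2:v=1:a=0[v]" \
       -map "[v]" -c:v libx264 -pix_fmt yuv420p out.mp4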
