Is there an easy way to automate taking screenshots? (Windows)

In what language can I write a quick program to take screenshots and also possibly emulate a keypress?
I have an animated/interactive flash movie that is a presentation. I want to take a screenshot after I press a particular key.
The end result is a bunch of screenshots that I can print, basically capturing the key moments of the Flash presentation.

I've written this in C# without much hassle. Here's the bulk of the code:
// Assumes: using System.Drawing; using System.Drawing.Imaging; using System.Windows.Forms;
// Example values (not part of the original snippet): capture the whole primary screen.
Size bitmapSize = Screen.PrimaryScreen.Bounds.Size;
string filename = "capture.png";

using (Bitmap bitmap = new Bitmap(bitmapSize.Width, bitmapSize.Height, PixelFormat.Format24bppRgb))
using (Graphics graphics = Graphics.FromImage(bitmap))
{
    // Copy the screen contents into the bitmap and save it as a PNG.
    graphics.CopyFromScreen(
        new Point(0, 0),
        new Point(0, 0),
        bitmapSize);
    bitmap.Save(filename, ImageFormat.Png);
}
I would recommend writing an app that hosts a browser control. Then you could have the browser control show the SWF and your app would know the exact coordinates of the part of the screen you need to capture. That way you can avoid having to capture a whole screen or whole window that you may have to crop later.

I'm sure there are other ways, but here's my idea: you can convert your movie frames to pictures using a tool like ffmpeg. From the ffmpeg man page:
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
This will extract one video frame per second from the video and will output them in files named foo-001.jpeg, foo-002.jpeg, etc.
Images will be rescaled to fit the new WxH values.
If you want to extract just a limited number of frames, you can use the above command in combination with the -vframes or -t option,
or in combination with -ss to start extracting from a certain point in time.
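For example, a sketch combining -ss and -vframes to grab a single frame at the 30-second mark (same placeholder names as above):
ffmpeg -ss 30 -i foo.avi -vframes 1 -s WxH -f image2 foo-030.jpeg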
The number in the file name "simulates" the key press: if you extracted one frame per second and you want to "press" the key at 30 seconds, use the file foo-030.jpeg.

There's a free tool that I found recently that does the screen capture part. It's apparently written in Java.
http://screenr.com/

Related

Scalable solution for converting an image sequence to video

We are developing a stop motion app for kids and schools.
So what we have is:
A sequence of images and audio files (no overlapping audio in v1, but there can be gaps between them)
What we need to do:
Combine the images into a video with a frame rate between 1 and 12 fps
Add multiple audio files at given start times
Encode with H.265 to MP4 format
I would really like to avoid maintaining a VM or Azure batch jobs running ffmpeg jobs if possible.
Are there any good frameworks or third-party APIs?
I have only found Transloadit as the closest match, but they don't have the option to add multiple audio files.
Any suggestions or experience in this area are much appreciated.
You've mentioned FFmpeg in your tags, and it is a tool that checks all the boxes.
For the first part of your project (making a video from images) you should check this link. To sum up, you'll use this kind of command:
ffmpeg -r 12 -f image2 -i PATH_TO_FOLDER/frame%d.png PATH_TO_FOLDER/output.mp4
-r 12 being your framerate, here 12. You control the output format with the file extension. To control the video codec check out this link; you'll need to use the option -c:v libx265 before the filename of your output.
With FFmpeg you add audio the same way you add video: with the -i option followed by your filename. If you want to cut the audio, seek into it with the -ss and -t options. If you want an audio track to start at a certain point, check out -itsoffset; you can find a lot of examples.
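Putting those pieces together, a rough sketch (the file names, the 3-second offset, and the AAC audio codec are placeholders for illustration, not from the question):
ffmpeg -r 12 -f image2 -i PATH_TO_FOLDER/frame%d.png -itsoffset 3 -i audio1.mp3 -map 0:v -map 1:a -c:v libx265 -c:a aac PATH_TO_FOLDER/output.mp4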

Set specific frame as thumbnail for video?

I just want some confirmation, because I have the sneaking suspicion that I won't be able to do what I want to do, given that I already ran into some errors about ffmpeg not being able to overwrite the input file. I still have some hope that what I want to do is some kind of exception, but I doubt it.
I already used ffmpeg to extract a specific frame into its own image file, I've set the thumbnail of a video with an existing image file, but I can't seem to figure out how to set a specific frame from the video as the thumbnail. I want to do this without having to extract the frame into a separate file and I don't want to create an output file, I want to edit the video directly and change the thumbnail using a frame from the video itself. Is that possible?
You're probably better off asking this on IRC: #ffmpeg-devel on Freenode.
I'd look at "-ss 33.5" or a more precise filter "-vf 'select=gte(n,1000)'" both will give same or very similar result at 30 fps video.
You can of course pipe the image out to your own process without saving it to disk: ffmpeg ... -f image2pipe - | ...
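A sketch of that piping idea, assuming a file called input.mp4, the 33.5-second mark, and a downstream process of your own:
ffmpeg -ss 33.5 -i input.mp4 -frames:v 1 -f image2pipe -c:v mjpeg - | your_process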

How to add a Poster Frame to an MP4 video by timecode?

The mvhd atom or box of the original Quicktime MOV format supports a poster time variable for a timecode to use as a poster frame that can be used in preview scenarios as a thumbnail image or cover picture. As far as I can tell, the ISOBMFF-based MP4 format (.m4v) has inherited this feature, but I cannot find a way to set it using FFmpeg or MP4box or similar cross-platform CLI software. Edit: Actually, neither ISOBMFF nor MP4 imports this feature from MOV. Is there any other way to achieve this, e.g. using something like HEIFʼs derived images with a thmb (see Amendment 2) role?
The original Apple Quicktime (Pro) editor did have a menu option for doing just that. (Apple Compressor and Photos could do it, too).
To be clear, I do not want to attach a separate image file, which could possibly be a screenshot grabbed from a movie still, as a separate track to the multimedia container. I know how to do that:
Stackoverflow #54717175
Superuser #597945
I also know that some people used to copy the designated poster frame from its original position to the very first frame, but many automatically generated previews use a later time index, e.g. from 10 seconds, 30 seconds, 10% or 50% into the video stream.

Generate a thumbnail from the middle of every scene change in a video using ffmpeg or other software

Is there a way to generate thumbnails at scene changes using ffmpeg, but taking the middle between two I-frames instead of the I-frames themselves? This assumes the midpoint between two major changes is usually the best moment to take a thumbnail from.
Yes, there is a way!
Execute this command:
ffmpeg -i yourvideo.mp4 -vf select="eq(pict_type\,I)" -vsync 0 -an thumb%03d.png
It is really simple: the select filter keeps only the I-frames (keyframes), which typically fall on or near scene changes, and writes each one out as a thumbnail.
The parts to change:
yourvideo.mp4 (you need to change this to your video input)
thumb%03d.png (This is your output)
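If you want to key off actual scene changes rather than I-frames, ffmpeg's select filter also exposes a scene-change score; a sketch (the 0.3 threshold is an assumption you would tune per video):
ffmpeg -i yourvideo.mp4 -vf select='gt(scene,0.3)' -vsync vfr -an thumb%03d.png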

Generate all the files (.vtt + sprite) for the Tooltip Thumbnails options of Jwplayer

What is the best way to generate the .VTT file and the JPG sprite that goes with it for JWPlayer's tooltip thumbnails (http://www.jwplayer.com/blog/building-tooltip-thumbnails-with-encodingcom/)?
I know how to make an image sprite with PHP, but I don't know how to take the screenshots of each video at the right times in seconds. I think there must be a server-side tool that does all of these tasks, but I can't find it.
Thanks
I wrote a script to do this task. Given a video file (MP4 or M4V), it generates thumbnail images, compresses them into a sprite, and generates a VTT file compatible with JWPlayer tooltip thumbnails. All of the image manipulation uses tools from ffmpeg, ImageMagick, and optionally sips and optipng. The WebVTT generation part I had to write myself.
You will have to install ffmpeg and ImageMagick, at a minimum, to use this.
Github code is here: https://github.com/vlanard/videoscripts (under sprites/).
The basic gist is:
Create a bunch of thumbnails, e.g. one every 45 seconds of the video:
ffmpeg -i ../archive/myvideofile.mp4 -f image2 -bt 20M -vf fps=1/45 thumbs/myvideofile/tv%03d.png
Resize those thumbnails to be small, e.g. 100 pixels wide:
sips --resampleWidth 100 thumbs/myvideofile/tv001.png thumbs/myvideofile/tv002.png thumbs/myvideofile/tv003.png
Or, if sips is not available, use the ImageMagick mogrify utility:
mogrify -geometry 100x thumbs/myvideofile/tv001.png thumbs/myvideofile/tv002.png thumbs/myvideofile/tv003.png
Get the height & width dimensions of one of the thumbnails to use as the basis of our grid coordinates, using ImageMagick utility
identify -format "%g - %f" thumbs/myvideofile/tv001.png
which returns output like:
100x55+0+0 - tv001.png
from which we parse 100 and 55 as our width and height, and the general geometry of each thumbnail (W, H, X, Y).
We then generate our single spritemap from the individual thumbnails. We determine the target grid size (e.g. 2x2, 8x8) to suit the number of thumbnails we generated for this video, and pass in the sprite geometry, using the ImageMagick montage utility:
montage thumbs/myvideofile/tv*.png -tile 2x2 -geometry 100x55+0+0 thumbs/myvideofile/myvideofile_sprite.png
Optionally we can run an extra compression step here to make the sprite smaller
optipng thumbs/myvideofile/myvideofile_sprite.png
We then generate a VTT file based on the number of thumbnails we created, using the interval that spaced out the thumbnails to label each time segment, and using the known coordinates of each consecutive image within our sprite that maps to the associated segment.
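The resulting VTT looks roughly like this (hypothetical timings and sprite coordinates, based on the 45-second interval and 100x55 thumbnails used above):
WEBVTT

00:00:00.000 --> 00:00:45.000
myvideofile_sprite.png#xywh=0,0,100,55

00:00:45.000 --> 00:01:30.000
myvideofile_sprite.png#xywh=100,0,100,55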
I've developed a Ruby gem to easily create the .VTT file and the sprite of thumbnails.
Thanks for the inspiration, @randalv!
You can take a look at it here:
https://github.com/scaryguy/jwthumbs
Usage
Instantiate your video file:
movie = Jwthumbs::Movie.new("YOUR_VIDEO.mp4")
Jwthumbs::Movie.new accepts an options hash as a second parameter. You can configure several things at the same time you instantiate your video, like this:
movie = Jwthumbs::Movie.new("YOUR_VIDEO.mp4", seconds_between: 60, sprite_name: "my_sprite_name.jpg")
or, after you've instantiated your video, you can configure things on the Jwthumbs::Movie object:
movie = Jwthumbs::Movie.new("YOUR_VIDEO.mp4")
movie.seconds_between = 60
movie.sprite_name = "my_sprite_name.jpg"
and then, to create your thumbnails and .VTT file, just run:
movie.create_thumbs!
I know this is already a few years old, but I had the same problem and found a command-line tool that generates sprites pretty fast and, since 1.0.6, supports WebVTT creation out of the box. Its name is mt and you can check it here.
Quoting from their documentation you can use it like this:
Just run mt and provide any video file as an argument: mt video.avi
Some of the settings can be changed through runtime flags provided directly to mt; for more information just run mt --help
Option 1 :
You can use encoding.com's API and tell it to export the VTT file too.
I recommend reading the "How can I create time synced thumbnails for use in JW player?" explanation in encoding.com's knowledge base.
Option 2 :
Use movie thumbnailer (mtn), a command-line tool that runs on UNIX and Windows systems. But you will have to write a custom script to generate the corresponding VTT file.
Super fast, thanks to FFmpeg's libavcodec.
Command line program: can be used on remote connections to co-location servers, or used in scripts.
Batch mode: recursively searches directories for movie files. Runs at lower priority (nice 10 on Linux, idle on Windows) by default; to run at normal priority use the -n option.
Thumbnails are grouped together in one JPEG file and can be saved individually too (-I option).
Works fine with Unicode filenames on both Linux and Windows (you might need to change the font with -f fontfile).
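For example, a minimal invocation using only the flags mentioned above (the file name is a placeholder; run mtn --help for the full list):
mtn -I video.mp4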
