I'm trying to extract image frames from a short clip from an IP cam. Specifically this clip:
http://db.tt/GQwu0nZ8
So I'm trying to extract the frames with ffmpeg like this:
ffmpeg -i M00001.jpg -vcodec mjpeg -f image2 image%03d.jpg
I'm only getting the first frame of the clip. How can I get the rest of the frames? Can I use another tool to get those images?
Thank you
This may be too much, but here's a short JavaScript program for Node.js (https://nodejs.org/) that will strip out all the readable frames and save them as separate, sequentially numbered jpg files in the current directory. Beware: even a short video clip can generate thousands of frames, and the V8 JavaScript engine that Node uses is really fast, so I recommend closing your file browser because it will hog resources trying to keep up.
If the video file is too large to create a buffer for, Node will issue an error and exit.
In that case the easiest thing to do would be to split the file into chunks with your shell utilities or a program like HexEdit (http://www.hexedit.com/).
Rewriting this code to process the file asynchronously would fix that issue, but writing asynchronous code still gives me anxiety.
var fs = require("fs"); // Load the filesystem module
var orgFile = process.cwd() + "/" + process.argv[2]; // Grab the video filename from the command line
var fdata = fs.readFileSync(orgFile); // Read the whole video file into a buffer
var fileSizeInBytes = fdata.length;
var i = 0;
var fStart = -1; // -1 means "no start-of-image marker seen yet"
var fStop = -1;
var fCount = 0;
// This section looks for the markers at the beginning (FF D8 FF) and end (FF D9)
// of each jpg image, records their positions, and writes them as separate files.
while (i < fileSizeInBytes - 1) {
    if (fStart < 0 && fdata[i] == 0xFF && fdata[i + 1] == 0xD8 && fdata[i + 2] == 0xFF) {
        fStart = i; // Start of a jpg image
    } else if (fStart >= 0 && fdata[i] == 0xFF && fdata[i + 1] == 0xD9) {
        fStop = i + 2; // End of the image, including the FF D9 marker itself
        fCount++;
        fs.writeFileSync(orgFile + "." + fCount.toString() + ".jpg", fdata.slice(fStart, fStop));
        console.log(orgFile + "." + fCount.toString() + ".jpg");
        fStart = -1;
        fStop = -1;
    }
    i++;
}
console.log("Wrote " + fCount.toString() + " frames.");
If you save the above code as mjpeg_parse.js, an invocation example would be:
node mjpeg_parse.js videofile.avi
The ffmpeg command works fine, but you need to give it the mjpeg video as the input file. If M00001.jpg is a single jpg image, then you will get only one (identical) output image.
I'd like to loop an audio watermark over a longer piece of audio, say every 20 seconds.
Right now I have mixed the two pieces of audio and the watermark plays at the very beginning:
command = ffmpeg(tempFilePath) // File path to base audio file
.setFfmpegPath(ffmpeg_static)
.input(tempWatermarkAudioPath) // File path to watermark audio file
// Need to loop the watermark
.complexFilter("amix=inputs=2:duration=longest")
.audioChannels(2)
.audioFrequency(48000)
.format('mp3')
.output(targetTempFilePath);
I have looked at ffmpeg: How to repeat an audio "watermark" and tried to do the following to no avail:
.complexFilter("amovie=" + tempWatermarkAudioPath + ":loop=0,asetpts=N/SR/TB[beep];[0][beep]amix=duration=longest,volume=2")
and
.complexFilter("amovie=[b]:loop=0,asetpts=N/SR/TB[b];[0][b]amix=duration=longest,volume=2")
In both these cases I get "File path not found" using Google Cloud Functions.
Any help would be greatly appreciated.
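One possible direction, sketched here under assumptions rather than verified: instead of amovie, add the watermark as a normal second input, repeat it with -stream_loop -1, and let amix=duration=first trim the mix to the base track. If the beep should fire every 20 seconds rather than back to back, the watermark file itself would first need to be padded out to 20 seconds (e.g. with apad). All file names below are placeholders:

```javascript
// Sketch: raw ffmpeg arguments for mixing a looped watermark over a base track.
// -stream_loop -1 must precede the -i it applies to; duration=first stops the
// mix at the end of the base audio. File names are placeholders.
function buildLoopMixArgs(base, watermark, out) {
    return ["-i", base,
            "-stream_loop", "-1", "-i", watermark,
            "-filter_complex", "amix=inputs=2:duration=first",
            "-ac", "2", "-ar", "48000",
            out];
}

console.log("ffmpeg " + buildLoopMixArgs("base.mp3", "beep.mp3", "out.mp3").join(" "));
```

With fluent-ffmpeg the equivalent would be input options on the watermark input rather than an amovie filter, which may also sidestep the "File path not found" issue, since the path is resolved as a normal input.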
I just want some confirmation, because I have the sneaking suspicion that I won't be able to do what I want to do, given that I already ran into some errors about ffmpeg not being able to overwrite the input file. I still have some hope that what I want to do is some kind of exception, but I doubt it.
I already used ffmpeg to extract a specific frame into its own image file, and I've set the thumbnail of a video with an existing image file, but I can't seem to figure out how to set a specific frame from the video as the thumbnail. I want to do this without having to extract the frame into a separate file, and I don't want to create an output file: I want to edit the video directly and change the thumbnail using a frame from the video itself. Is that possible?
You're probably better off asking in IRC: #ffmpeg-devel on freenode.
I'd look at "-ss 33.5" or a more precise filter, "-vf 'select=gte(n,1000)'"; both will give the same or a very similar result for a 30 fps video.
You can of course pipe the image out to your own process without saving it: "ffmpeg ... -f image2pipe -vcodec mjpeg - | ..."
I'm running into a speed optimization issue. I'm building a video cut tool with web technologies on the desktop using TideSDK. One of the tools has a timeline with a position slider.
Basically, whenever the slider moves (using jQuery UI), I get the position, translate it into a timecode, and ask FFMPEG to encode a frame to a file; when I get the finished event, I simply update the background-image attribute of the 'viewer' to this file. The file is located in some temporary folder.
The thing is, it is just a bit too slow. Usable, but slow (approx 2 fps on a high-end computer).
I think there are 2 bottlenecks in this strategy:
- Writing ffmpeg output to a file & reading it back in CSS
- Repeatedly loading the same movie file in ffmpeg
This is the code executed on each move (var timecode is the calculated timecode based on the pointer position):
var cmd = [FFMPEG];
cmd.push('-y'); //overwrite existing files
cmd.push('-ss',timecode); //CUE position
cmd.push('-i',input); //input file
cmd.push('-f','image2'); //output format
cmd.push('-vframes','1'); //number of images to render
cmd.push(Ti.API.Application.getDataPath( )+"/encoderframe.jpg"); //output file
var makeframe = Ti.Process.createProcess(cmd);
makeframe.setOnReadLine(function(data){ /*console.log(data);*/ });
var time = new Date().getTime();
makeframe.setOnExit(function(){ ffmpegrunning = false; $('#videoframe').css('background-image','url(file://'+Ti.API.Application.getDataPath( ).replace(/ /g,"%20")+'/encoderframe.jpg?'+time+')'); }); //note the /g flag so every space is escaped, not just the first
makeframe.launch();
Basically, this repeatedly runs the same command:
ffmpeg -y -ss 00:00:01.04 -i /somepath/somevideo.mov -f image2 -vframes 1 /path/to/output/encoderframe204.jpg
How can I optimize this code? Pipe the output straight to a CSS background as Base64 data, or reuse the already-loaded file in ffmpeg's memory?
Thanks!
jpeg image
How is the above jpg image animated? As far as I know, the jpg format does not support animation.
No, the JPEG file format has no inherent support for animation.
The image you linked is actually an animated GIF disguised with a jpg file extension. (The browser apparently ignores even the MIME type and looks at the file header bytes in such cases.)
If you view the image in firefox, you can right-click on it and select properties:
You'll see Type: GIF image (animated, 54 frames)
Thus, it is a gif-image that has been renamed to .jpg.
For completeness, I'd like to point out that there's Motion-JPEG - sort of a jpg animation.
MJPEGs, usually produced by webcams, are a stream of JPEG files concatenated one after another, sometimes delimited by an HTTP header, and served by webcam webservers with a MIME type of multipart/x-mixed-replace;boundary=, where boundary= defines the delimiter.
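As a small illustration of that delimiter convention, here's a sketch that pulls the boundary token out of such a Content-Type header (the header value below is invented for the example):

```javascript
// Sketch: extract the frame delimiter an MJPEG webserver advertises in its
// Content-Type header. The example header value is made up for illustration.
function boundaryOf(contentType) {
    var match = /boundary=(.*)$/.exec(contentType);
    return match ? match[1] : null;
}

console.log(boundaryOf("multipart/x-mixed-replace;boundary=myframe")); // myframe
```

A client consuming such a stream would split the incoming bytes on that boundary to recover the individual JPEG frames.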
A search for animated-JPEG-related projects on GitHub turns up two findings:
In case people care about the size of an animated GIF, they strip it into separate JPG frames and tell the browser to swap those frames in place via some JavaScript code (see Pawel's answer for an example).
Then there's actually a proposed Animated JPEG standard, which stems from MJPEG and declares the framerate and so forth in each JPG frame. It's unlikely to arrive in browsers anytime soon.
And lastly, I've seen image hosts replace large animated GIFs with an mp4 version of the GIF for presentation, plus some JavaScript to serve the actual GIF for downloads and non-supporting browsers.
And no, JPEG itself, via JFIF, does not offer a facility to animate a JPG file in itself, just as Noldorin already noted in the chosen answer. :shrug:
It is a GIF image... the extension has been changed by hand. The browser engine is smart enough to determine the image format regardless of the file extension.
var c = 1;
/* Preload the five animation frames */
var images = [];
for (var n = 1; n <= 5; n++) {
    images[n] = new Image();
    images[n].src = "a" + n + ".jpg";
}
/* Swap the next frame into the page; assumes an <img name="ani"> element */
function disp_img() {
    if (c == 6) {
        c = 1;
    }
    document.ani.src = "a" + c + ".jpg";
    c++;
}
t = setInterval(disp_img, 1000); // pass the function itself, not a string to eval
No, JPEG doesn't support animation. Saving a GIF file with a .jpeg extension doesn't make it a JPEG file; it's still a GIF file, because the OS image viewer doesn't go by the file extension, it looks at the content.
If you open that file as binary (in a text editor) you will see that the first line contains
GIF89a, which is the magic number for GIF, followed by binary image data.
Yes,
you can make an animation using a single jpeg. Google "jpeg css sprites". Of course, this is not native animation support in the jpeg format.
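To make the sprite idea concrete, here's a minimal sketch: all frames sit side by side in one jpg strip, and a timer shifts background-position by one frame width each tick. The 100px frame width and the sprite layout are assumptions for illustration:

```javascript
// Sketch of css-sprite animation: compute the background-position offset for
// a given frame in a horizontal strip. The 100px frame width is assumed.
function spriteOffset(frame, frameWidth) {
    return "-" + (frame * frameWidth) + "px 0";
}

// In a browser you would do something like:
// setInterval(function () {
//     el.style.backgroundPosition = spriteOffset(frame++ % frameCount, 100);
// }, 100);
console.log(spriteOffset(3, 100)); // -300px 0
```

The single jpg never changes; only the visible window into it moves, which is why this works despite jpeg having no animation support.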
A bit of a necro-post, but since this question popped up first when I tried to get info about Pixel motion jpeg, here's some additional info.
Since the Pixel 2, Google has created motion jpeg, which is an ordinary jpeg with an mp4 video appended at the end.
More on this here:
https://android.jlelse.eu/working-with-motion-photos-da0aa49b50c
JPG does not animate. You either saw a series of JPG images rendered with JavaScript, or you saw a GIF file named as a JPG. A web server and browser may still recognize the correct GIF filetype even if the wrong extension has been added to the filename.
If you open the image file with a hex editor, you can tell whether it is actually a GIF: the first 4 bytes will be the GIF magic number, "GIF8" (hex 47 49 46 38).
In what language can I write a quick program to take screenshots and also possibly emulate a keypress?
I have an animated/interactive flash movie that is a presentation. I want to take a screenshot after I press a particular key.
The end effect is a bunch of screenshots that I can print...basically captures the key moments in the flash presentation.
I've written this in C# without much hassle. Here's the bulk of the code:
// Requires System.Drawing and System.Drawing.Imaging;
// bitmapSize and filename are defined elsewhere.
using (Bitmap bitmap = new Bitmap(bitmapSize.Width, bitmapSize.Height, PixelFormat.Format24bppRgb))
using (Graphics graphics = Graphics.FromImage(bitmap))
{
    // Copy the whole screen, starting at the top-left corner, into the bitmap
    graphics.CopyFromScreen(
        new Point(0, 0),
        new Point(0, 0),
        bitmapSize);
    bitmap.Save(filename, ImageFormat.Png);
}
I would recommend writing an app that hosts a browser control. Then you could have the browser control show the SWF and your app would know the exact coordinates of the part of the screen you need to capture. That way you can avoid having to capture a whole screen or whole window that you may have to crop later.
I am sure there are other ways, but here's my idea: you can convert your movie frames to pictures using tools like ffmpeg. From the man page of ffmpeg:
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg
This will extract one video frame per second from the video and will output them in files named foo-001.jpeg, foo-002.jpeg, etc.
Images will be rescaled to fit the new WxH values.
If you want to extract just a limited number of frames, you can use the above command in combination with the -vframes or -t option,
or in combination with -ss to start extracting from a certain point in time.
The number in the file name "simulates" the key press, so if you extracted one frame per second and you want to "press" the key at 30 sec, use the file named foo-030.jpeg.
There's a free tool that I found out about recently that does the screen capture part; it's apparently written in Java.
http://screenr.com/