Source video is H.264 in an MP4 container, and I'm trying to split it into individual encoded frames. I tried the following command line:
ffmpeg -i "input.mp4" -f image2 "%d.h264"
But that creates jpegs with the extension "h264", rather than actual H.264 frames.
It turns out the correct command line is:
ffmpeg -i "inputfile" -f image2 -vcodec copy -bsf h264_mp4toannexb "%d.h264"
There is no such thing as an "h264" image. H.264 is a video compression standard; it has gone through many revisions, defines several profiles, and has many encoder and decoder implementations, proprietary and otherwise.
If you are trying to convert a video into an image sequence, you will need to decide what image format you want the exported frames in. The -f image2 argument selects the image-sequence muxer; if you don't want to lose any quality, save the frames into a lossless container such as bmp, png, or tiff. Alternatively, you can compress the images into something like a .jpg container (which is probably what FFmpeg defaulted to in your original command, because you didn't give it an image extension it understood).
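For instance, a lossless PNG sequence could be written along these lines (file names here are just placeholders):
ffmpeg -i "input.mp4" -f image2 "%d.png"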
Edit: If for some reason you are trying to create a sequence of video files that contain only one frame each, it doesn't make much sense to compress them with H.264, which gets most of its efficiency from temporal prediction across frames. You could, I suppose, make a sequence of uncompressed video files that contain only one frame each, but I can't imagine what the purpose of that would be when plain images would accomplish the same thing.
Related
I am investigating the possibility of storing video streams coming from a few sources, already coded in H.264, without transcoding, since the device I would like to use for this project won't be capable of transcoding the combined video on the fly.
What I am looking for is two or more pictures side by side (not video concatenation) packed into an mp4/avi/mkv container.
I believe the mkv container supports this kind of packaging, but I've not been able to find the appropriate options for ffmpeg or any other tool to store it this way. Everything I have tried just does a very slow transcode into one big H.264 stream.
If your player can handle it, just make it perform the side-by-side view. No encoding or muxing required.
mpv video player
Example using mpv:
mpv --lavfi-complex="[vid1][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
The above example assumes each input has the same height. Otherwise you will have to add the scale, scale2ref, pad, and/or crop filters. Simple example using the crop filter to remove 20 pixels from the height:
mpv --lavfi-complex="[vid1]crop=iw:ih-20[c];[c][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
See the mpv documentation and FFmpeg Filters for more info.
Just specify multiple inputs.
ffmpeg -i [input 1] -i [input 2] ... -map 0 -map 1 ... -codec copy -f matroska [output]
As for the "side-to-side" part, it's up to the player to determine the presentation. If you don't control the player and you need a specific layout or presentation, then you must "burn" all these video streams into a new one and encode it as a new single stream.
I have a QuickTime video file whose video stream is in Motion JPEG format, and I extract every frame in the file with
ffmpeg -i a.mov -vcodec copy -f image2 %d.jpg
I found that every jpeg file actually contains two FFD8 markers, which means there are two images in one single jpeg file.
Is this correct? Is the file interlaced? Is there anything special I need to pass to the codec?
Yes, Motion JPEG supports interlaced content. If each jpeg is half the full video height, that means the mov is interlaced, and you cannot use -vcodec copy to extract the frames. Try ffmpeg's -deinterlace option or use the yadif filter.
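Since the frames have to be re-encoded in that case, something along these lines should give one deinterlaced jpeg per frame (the -q:v value is just a suggested setting to keep the re-encoded jpegs close to the original quality):
ffmpeg -i a.mov -vf yadif -q:v 2 %d.jpg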
I would like to transcode a video stream using the ffmpeg tool and change only the video stream's resolution, i.e. all other video and audio parameters should remain the same.
According to the ffmpeg man page, the following command line should provide the desired result:
ffmpeg -i input.mp4 -vcodec copy -acodec copy -s WxH output.avi
The video codec of the input stream is compatible with the avi container.
The actual result is that the resolution remains unchanged and the stream seems to be just repacked into the avi container.
The resolution of the output stream is changed successfully without the -vcodec copy option, but the video codec is changed: h264 (Constrained Baseline) -> mpeg4 (Simple Profile).
When you copy a video stream, you cannot change any of its parameters, since… well, you're copying it. ffmpeg won't touch it in any way, so it can't change the dimensions, frame rate, et cetera.
Also, ffmpeg always chooses a default video codec if you don't specify one. For AVI files, that's mpeg4.
If you want H.264 video, choose -c:v libx264 instead (or -vcodec libx264 which is the same). If you need to keep the original profile, use -profile:v baseline.
Two things:
When you change the size, you have to re-encode the video. This lowers the quality and might harm the video considerably. To compensate for this, you might need to set a higher quality level. You do this by setting the Constant Rate Factor to anything below the default of 23, e.g. with -crf 20. Experiment and see how your video looks. If you have the time, add -preset slow (or slower, veryslow), which will give you better compression. A full command putting these options together is sketched after these notes.
Not that it matters in your case, since your input uses the Constrained Baseline profile, but note that H.264 in AVI is not properly supported, at least when B pictures are used. Baseline doesn't support B pictures, though, so you should be fine. It could happen that the file can't be played back on some devices or players if you use the Main profile or anything above. I would rather mux it into an MP4 or MKV container, especially since your input file is MP4 anyway.
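Putting those pieces together, a command along these lines should do what you want (W and H are placeholders for your target resolution, and the CRF and preset values are only suggestions):
ffmpeg -i input.mp4 -c:v libx264 -profile:v baseline -crf 20 -preset slow -s WxH -c:a copy output.mkv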
Mencoder has a lovely option for converting an mjpeg file into an avi file with an 'MJPG' codec that plays in VLC.
The command line to do this is:
mencoder filename.mjpeg -oac copy -ovc copy -o outputfile.avi -speed 0.3
where 0.3 is the ratio of the desired playback framerate to the default 25 fps (i.e. 0.3 × 25 = 7.5 fps). All this does is copy the mjpeg data, put an AVI header at the top and, at the end, what seems to be an index of the frame positions in the file.
I want to replicate this in my own code, but I can't find documentation anywhere. What is the exact format of the index section? The header has extra filler bytes in it for some reason - what's this about?
Anyone know where I can find documentation? Both mencoder and vlc seem to have this codec built in.
After much work, study and fiddling around with HxD and RiffPad, I finally figured it out. It would take a long blog entry to explain it all, but basically there isn't really an 'MJPG' codec out there - mjpg just uses a few tricks and unusual parts of the avi standard to produce an indexed file.
The key is to place a '00dc' fourcc followed by an Int32 length, occupying the 8 bytes immediately before each JPEG start marker. If you want the avi to be random access, then you also need an index at the end which points to each of the '00dc' chunk positions.
VLC will play this natively. If you have ffmpeg installed, then Windows Media Player uses that to decode these types of mjpg files.
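If it helps while reverse-engineering the layout, ffmpeg can produce the same kind of indexed MJPG-in-AVI file to compare against in RiffPad (a sketch; the frame rate and file names are assumptions):
ffmpeg -r 7.5 -i filename.mjpeg -c:v copy outputfile.avi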
I have a C# program generating JPEG images in real time, and I need to (continuously) generate a video from the images and stream it (also in real time).
I've used ffmpeg to transcode an input video source and stream it; doesn't ffmpeg have an option to take the input as a set of images (which are continuously being generated) and make the video out of them?
Cheers
Actually I used VLC for the streaming....
Actually I just found out that I could:
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
But I need to tell ffmpeg to keep doing it; I mean, if it doesn't find another image, ffmpeg should wait for another one to be generated... is this possible?
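One approach that fits this is to pipe the JPEGs into ffmpeg's standard input with the image2pipe demuxer; ffmpeg then keeps reading for as long as the pipe stays open, so it naturally waits for the next image instead of stopping. A sketch (the producer command, frame rate, and streaming destination are placeholder assumptions; from C# you would write the JPEG bytes to the ffmpeg process's stdin instead):
your_jpeg_producer | ffmpeg -f image2pipe -framerate 25 -vcodec mjpeg -i - -c:v libx264 -f mpegts udp://127.0.0.1:1234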