I have a C# program that generates JPEG images in real time, and I need to (continuously) build a video from those images and stream it (also in real time).
I've used FFmpeg to transcode an input video source and stream it. Doesn't FFmpeg have an option to take the input as a set of images (which are still being generated) and make the video out of them?
Cheers
Actually I used VLC for the streaming....
Actually I just found out that I could do:
ffmpeg -f image2 -i img%d.jpg /tmp/a.mpg
But I need to tell FFmpeg to keep going: if it doesn't find another image, it should wait for one to be generated. Is this possible?
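One possible direction (an untested sketch; the UDP address is only a placeholder target): instead of reading numbered files, have the C# program write each JPEG to FFmpeg's stdin and use the image2pipe demuxer, so FFmpeg keeps encoding for as long as frames keep arriving:
ffmpeg -f image2pipe -c:v mjpeg -i - -f mpegts udp://127.0.0.1:1234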
I am trying to generate a video with libavformat/libavcodec from a bunch of images that are in memory.
Can someone point me in the right direction, please?
Thanks in advance.
First, the basics of creating a video from images with FFmpeg are explained here.
If you simply want to change/force the format and codec of your video, here is a good start.
For the raw FFmpeg documentation you could use the Video and Audio Format Conversion, the Codec Documentation, the Format Documentation, and the image2 demuxer documentation (this demuxer handles images as an input).
If you just want to take images and make a simple video out of them, look at the first two links. FFmpeg's documentation gives you powerful tools, but don't use them if you don't need them.
A sample command to create a video from images is:
ffmpeg -i image-%03d.png video.mp4
This will take all the files in sequence from image-000.png to the highest number available and make a video out of them.
You can force the format with the extension of the output file. To force the video codec, use -c:v followed by a codec name available in the codec documentation.
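As a hedged illustration (the frame rate, codec, and pixel format here are just example choices, not values taken from the question):
ffmpeg -framerate 25 -i image-%03d.png -c:v libx264 -pix_fmt yuv420p video.mp4
The -pix_fmt yuv420p part is only there because many players cannot handle other pixel formats.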
I am investigating the possibility of storing video streams coming from a few sources, already encoded in H.264, without transcoding, as the device I would like to use for this project won't be capable of transcoding the combined video on the fly.
What I am looking for is two or more pictures side by side (not video concatenation) packed into MP4/AVI/MKV.
I believe the MKV container supports this kind of packaging, but I've not been able to find the appropriate options for FFmpeg or another tool to store it this way. What it does instead is a very slow transcode into one big H.264 stream.
If your player can handle it, just make it perform the side-by-side view. No encoding or muxing is required.
mpv video player
Example using mpv:
mpv --lavfi-complex="[vid1][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
The above example assumes each input has the same height. Otherwise you will have to add the scale, scale2ref, pad, and/or crop filters. Simple example using the crop filter to remove 20 pixels from the height:
mpv --lavfi-complex="[vid1]crop=iw:ih-20[c];[c][vid2]hstack[vo];[aid1][aid2]amix[ao]" input1.mp4 --external-file=input2.mp4
See the mpv documentation and FFmpeg Filters for more info.
Just specify multiple inputs.
ffmpeg -i [input 1] -i [input 2] ... -map 0 -map 1 ... -codec copy -f matroska [output]
As for the side-by-side part, it's up to the player to determine the presentation. If you don't control the player and you need a specific layout or presentation, then you must "burn" all these video streams into a new one and encode it as a single stream.
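For completeness, a rough sketch of that "burn" step using FFmpeg's hstack filter (file names and codec choice are illustrative, and this does re-encode, which the question wanted to avoid):
ffmpeg -i left.mp4 -i right.mp4 -filter_complex "[0:v][1:v]hstack=inputs=2[v]" -map "[v]" -map 0:a? -c:v libx264 output.mp4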
I'm looking for a script that can convert a video into two formats for my website:
MP4 and WebM
I also want it to create a JPEG of the first frame and output everything at 640×360.
I'm a beginner with FFmpeg, so I don't really know where to start. This is what I have for the moment, but it doesn't work:
ffmpeg -i /tmp/video.off /tmp/video.webm /tmp/video.mp4
The ideal situation would be a drag-and-drop conversion tool, but a folder-based one can do the trick too.
Thank you
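A rough starting point (a sketch, assuming H.264/AAC for the MP4 and VP8/Vorbis for the WebM; libx264, libvpx, and libvorbis must be available in your FFmpeg build, and the file names are placeholders):
ffmpeg -i input.mov -vf scale=640:360 -c:v libx264 -c:a aac output.mp4
ffmpeg -i input.mov -vf scale=640:360 -c:v libvpx -c:a libvorbis output.webm
ffmpeg -i input.mov -vframes 1 -vf scale=640:360 thumbnail.jpg
For the folder-based workflow, these three commands could be wrapped in a shell loop over the directory.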
I am a beginner programmer and am trying to use FFmpeg. I am trying to convert a bunch of images to video and add a background audio track. Can anybody tell me how to loop the audio to match the length of the generated video?
PS: The number of images varies, so can we implement something that dynamically loops the audio as required?
Use:
ffmpeg -i images%d.jpg -f lavfi -i amovie=audio.mp3:loop=0,asetpts=N/SR/TB -shortest out.mp4
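On newer FFmpeg builds, an alternative (a sketch, assuming a one-image-per-second slideshow; adjust -framerate and codecs to taste) is to loop the audio input with -stream_loop and cut at the shorter stream:
ffmpeg -framerate 1 -i images%d.jpg -stream_loop -1 -i audio.mp3 -c:v libx264 -c:a aac -shortest out.mp4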
The source video is H.264 in an MP4 container, and I'm trying to split it into individual encoded frames. I tried the following command line:
ffmpeg -i "input.mp4" -f image2 "%d.h264"
But that creates JPEGs with the extension ".h264" rather than actual H.264 frames.
It turns out the correct command line is:
ffmpeg -i "inputfile" -f image2 -vcodec copy -bsf h264_mp4toannexb "%d.h264"
There is no such thing as an "h264" image. H.264 is a standard for video compression; it has many different iterations and profiles, as well as proprietary implementations of H.264 encoders and decoders.
If you are trying to convert your video into an image sequence, you will need to decide what image format you want the exports to be. The -f image2 argument tells FFmpeg to write an image sequence; if you don't want to lose quality, you can save the outputs in a lossless format such as BMP, PNG, or TIFF. Alternatively, you can compress the images into something like JPEG (which is probably what FFmpeg defaulted to in your original command, because ".h264" didn't name an image format it understood).
Edit: If for some reason you are trying to create a sequence of video files that contain only one frame each, it doesn't make much sense to compress them with H.264. H.264 is largely a temporal encoding method and gets most of its benefit from having more than one frame. You could, I guess, make a sequence of uncompressed video files that contain only one frame each, but I can't imagine what the purpose would be when images would accomplish the same thing.
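If what you actually want is an image sequence rather than per-frame .h264 files, something like this (the output pattern and PNG format are just examples) would export every frame as an image:
ffmpeg -i input.mp4 frame-%04d.png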