Is it possible to send ffmpeg images by using a pipe? - ffmpeg

I want to send images as input to ffmpeg, and I want ffmpeg to output video to a stream (WebRTC format).
I found some information that, from my understanding, shows this is possible. I believe that ffmpeg can receive images from a pipe; does anyone know how this can be done?

"I want to send images as input to FFmpeg... I believe that FFmpeg could receive image from a pipe, does anyone know how this can be done?"
Yes, it's possible to send FFmpeg images by using a pipe. Use the standardInput to send frames. The frame data must be uncompressed pixel values (e.g. 24-bit RGB format) in a byte array that holds enough bytes (width x height x 3) to write a full frame.
Normally (in a Command or Terminal window) you set input and output as:
ffmpeg -i inputvid.mp4 outputvid.mp4
But for pipes you must first specify the incoming input's width/height, frame rate, etc. Then also add the incoming input filename as -i - (where the blank - means FFmpeg watches the standardInput connection for incoming raw pixel data).
You must put your frame data into a Bitmap object and send the bitmap's pixel values as a byte array. Each send will be encoded as a new video frame. Example pseudo-code (ActionScript 3 style):
public function makeVideoFrame ( frame_BMP:Bitmap ) : void
{
    //# Encodes the byte array of a Bitmap object as an FFmpeg video frame
    if ( myProcess.running == true )
    {
        Frame_Bytes = frame_BMP.bitmapData.getPixels( frame_BMP.bitmapData.rect ); //# read pixel values (ARGB order) into a byte array
        myProcess.standardInput.writeBytes( Frame_Bytes ); //# send data to FFmpeg to encode a new frame
        Frame_Bytes.clear(); //# empty byte array for re-use with next frame
    }
}
Any time you update your bitmap with new pixel information, you can write it out as a new frame by passing that bitmap to the function above, e.g. makeVideoFrame(my_new_frame_BMP);.
Your pipe's Process must start with these arguments:
-y -f rawvideo -pix_fmt argb -s 800x600 -r 25 -i - ....etc
Where...
-f rawvideo -pix_fmt argb means accept uncompressed ARGB pixel data (32 bits per pixel).
-s 800x600 sets the example input width and height, and -r 25 sets the frame rate, meaning FFmpeg must encode this many images for each second of output video.
The full setup looks like this:
-y -f rawvideo -pix_fmt argb -s 800x600 -r 25 -i - -c:v libx264 -profile:v baseline -level:v 3 -b:v 2500k -an out_vid.h264
If you get blocky video output, try specifying two output files...
-y -f rawvideo -pix_fmt argb -s 800x600 -r 25 -i - -c:v libx264 -profile:v baseline -level:v 3 -b:v 2500k -an out_tempData.h264 out_vid.h264
This will output a test H.264 video file which you can later put inside an MP4 container. The audio track -i someTrack.mp3 is optional.
-i myH264vid.h264 -i someTrack.mp3 outputVid.mp4
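For a quick command-line test of the same idea, you can also have one ffmpeg process generate raw ARGB frames and pipe them into a second ffmpeg process that reads from standard input. This is only a sketch: testsrc stands in for your real frame source and piped_test.mp4 is a placeholder output name.
# generate 100 raw ARGB frames and pipe them into a second ffmpeg that encodes them
ffmpeg -f lavfi -i testsrc=size=800x600:rate=25 -vframes 100 -f rawvideo -pix_fmt argb - \
| ffmpeg -y -f rawvideo -pix_fmt argb -s 800x600 -r 25 -i - \
-c:v libx264 -pix_fmt yuv420p piped_test.mp4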

Related

Converting images to video keeping GOP 1 using ffmpeg

I have a list of images, containing incremental integer values and saved in PNG format starting from number 1, which need to be converted to a video with GOP 1 using ffmpeg. I have used the following command to convert the images to video and subsequently used ffplay to seek to a particular frame. The displayed frame doesn't match the frame being sought. Any help?
ffmpeg -i image%03d.png -c:v libx264 -g 1 -pix_fmt yuv420p out.mp4

How to force ffmpeg to refresh overlay image more often?

I am trying to do sports live-streaming using ffmpeg. The score of a streaming match is fetched from a server and converted to a PNG. This PNG must appear on top of the video.
ffmpeg allows you to put an overlay over a video stream using the image2 demuxer. If I use -loop 1, this overlay updates approximately every 5 seconds. How can I force ffmpeg to read it from disk more often?
My current attempt, with the overlay updating once every 5 seconds (an mp4 video is used for testing purposes):
nice -n -19 ffmpeg \
-re -y \
-i s.mp4 \
-f image2 -loop 1 -i http://127.0.0.1:3000/img \
-filter_complex "[0:v][1:v]overlay" \
-threads 4 \
-v 0 -f mpegts -preset ultrafast udp://127.0.0.1:23000 \
&
P.S.
I know that I can make a YouTube streaming widget on the website and put the score on top of it just using HTML/CSS/JS. But unfortunately it must be done directly in the video stream.
P.P.S.
I know that I can use ffmpeg's drawtext. But it is not what I want. I have a specially designed PNG, which must be updated as frequently as possible (once every 1-2 seconds would be just great).
Three things:
1) -re is applied per input, so ffmpeg is currently reading your image at a rate asynchronous with respect to the video. Since the video is being read in real time, the image reader queues the packets of the looped image until the filtergraph can consume them. So the updated image will be consumed much later, and with a greater timestamp assigned than when it was actually updated. Add -re before the image -i to correct this.
2) Skip -loop 1 and use -stream_loop -1 instead, since the image2 demuxer can abort if the input is blocked or empty (due to an update) while it's trying to read it. Although, since the input is read via a network protocol, this may not be an issue for you.
3) You've specified no encoder in the output options. Since the format is MPEG-TS, ffmpeg will choose mpeg2video with a default bitrate of 200 kbps. The ultrafast preset does not apply to this encoder. You probably want to add -c:v libx264.
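Putting all three changes together, the revised command could look like this (an untested sketch based on the command from the question; adjust the remaining options to your setup):
nice -n -19 ffmpeg \
-re -y \
-i s.mp4 \
-re -f image2 -stream_loop -1 -i http://127.0.0.1:3000/img \
-filter_complex "[0:v][1:v]overlay" \
-c:v libx264 -preset ultrafast \
-threads 4 \
-v 0 -f mpegts udp://127.0.0.1:23000 \
&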
I have found that increasing the framerate value of image2 to 90-100 makes the file-reading process faster, but the audio becomes throttled.

Sync file timestamps with ffmpeg

I'm capturing video from 4 cameras connected over HDMI through a capture card. I'm using ffmpeg to save the video feed from the cameras to multiple JPEG files (30 JPEGs per second per camera).
I want to be able to save the images with the capture time. Currently I'm using this command for one camera:
ffmpeg -f video4linux2 -pixel_format yuv420p -timestamps abs -i /dev/video0 -c:a jpeg -t 60 -ts_from_file 2 camera0-%5d.jpeg
It saves my files with the names camera0-00001.jpeg, camera0-00002.jpeg, etc.
Then I rename my files to camera0-HH-mm-ss-(1-30).jpeg based on the modified time of the file.
So in the end I have 4 files with the same time and the same frame number, like this:
camera0-12-00-00-1.jpeg
camera1-12-00-00-1.jpeg
camera2-12-00-00-1.jpeg
camera3-12-00-00-1.jpeg
My issue is that the files may be offset by one to two frames. They may have the same name, but sometimes one or two cameras may show a different frame.
Is there a way to be sure that the captured frames carry the actual time of capture and not the time of the creation of the file?
You can use the mkvtimestamp_v2 muxer
ffmpeg -f video4linux2 -pixel_format yuv420p -timestamps abs -copyts -i /dev/video0 \
-vf setpts=PTS-STARTPTS -vsync 0 -vframes 1800 camera0-%5d.jpeg \
-c copy -vsync 0 -vframes 1800 -f mkvtimestamp_v2 timings.txt
timings.txt will have output like this
# timecode format v2
1521177189530
1521177189630
1521177189700
1521177189770
1521177189820
1521177189870
1521177189920
1521177189970
...
where each reading is the Unix epoch time in milliseconds.
I've switched to an output frame count limit (-vframes 1800) to stop the process instead of -t 60. You can use -t 60 for the first output, since we are resetting timestamps there, but not for the second. If you do that, remember to only use the first N entries from the text file, where N is the number of images produced.
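If you then want file names that carry the capture time (as in the renaming scheme from the question), one approach is to pair the numbered images with the entries in timings.txt, for example with a small shell loop. This is only a sketch; it assumes the images are numbered consecutively from 00001 and that timings.txt contains exactly one entry per saved image.
# skip the "# timecode format v2" header, then pair entry N with image N
n=1
grep -v '^#' timings.txt | while read ts; do
    mv "$(printf 'camera0-%05d.jpeg' "$n")" "camera0-${ts}.jpeg"
    n=$((n+1))
done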

Using FFmpeg to losslessly convert YUV to another format for editing in Adobe Premiere

I have a raw YUV video file that I want to do some basic editing to in Adobe CS6 Premiere, but it won't recognize the file. I thought I could use ffmpeg to convert it to something Premiere would accept, but I want this to be lossless because afterwards I will need it in YUV format again. I thought of AVI, MOV, and ProRes, but I can't seem to figure out the proper ffmpeg command line and how to ensure it is lossless.
Thanks for your help.
Yes, this is possible. It is normal that you can't open that raw video file, since it is just raw data in one giant file, without any headers. So Adobe Premiere doesn't know what the frame size is, what the framerate is, etc.
First make sure you have downloaded the FFmpeg command-line tool. After installing, you can start converting by running a command with parameters. There are some parameters you have to fill in yourself before starting to convert:
Which YUV pixel format are you using? The most common format is YUV 4:2:0 planar 8-bit (yuv420p). You can run ffmpeg -pix_fmts to get a list of all available formats.
What is the framerate? In my example I will use -r 25 (25 fps).
What encoder do you want to use? The libx264 (H.264) encoder is a great one for lossless compression.
What is your frame size? In my example I will use -s 1920x1080.
Then we get this command to do the conversion:
ffmpeg -f rawvideo -vcodec rawvideo -s 1920x1080 -r 25 -pix_fmt yuv420p -i inputfile.yuv -c:v libx264 -preset ultrafast -qp 0 output.mp4
A little explanation of all other parameters:
With -f rawvideo you tell FFmpeg that the input is raw video data, not wrapped in any container.
With -vcodec rawvideo you declare the input video data as uncompressed.
With -i inputfile.yuv you set your input file.
With -c:v libx264 you set libx264 as the encoder for the output video.
The -preset ultrafast setting only speeds up the encoding, so your file size will be bigger than with veryslow; with -qp 0 the output stays lossless either way.
With -qp 0 you set the maximum quality: 0 is lossless, while 51 is the worst quality.
Then output.mp4 is your new container to store your data in.
After you are done in Adobe Premiere, you can convert it back to a YUV file by reversing almost all parameters. FFmpeg recognizes what's inside the MP4 container, so you don't need to provide parameters for the input.
ffmpeg -i input.mp4 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1920x1080 -r 25 rawvideo.yuv
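If you want to verify that the round trip really is lossless, one option is to compare per-frame hashes of the original raw file and the decoded MP4 using FFmpeg's framemd5 muxer (a sketch reusing the file names and parameters from above; the header lines and timestamps may differ between the two outputs, but the hash column should be identical if the encode was truly lossless):
ffmpeg -f rawvideo -pix_fmt yuv420p -s 1920x1080 -r 25 -i inputfile.yuv -f framemd5 original.md5
ffmpeg -i output.mp4 -f framemd5 roundtrip.md5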

rawvideo and rgb32 values passed to FFmpeg

I'm converting a file to PNG format using this call:
ffmpeg.exe -vframes 1 -vcodec rawvideo -f rawvideo -pix_fmt rgb32 -s <width>x<height> -i infile -f image2 -vcodec png out.png
I want to use a converter that can be linked or compiled into a closed-source commercial product, unlike FFmpeg, so I need to understand the format of the input file I'm passing in.
So, what does rawvideo mean to FFmpeg?
Is FFmpeg determining what type of raw format the input file has, or does rawvideo denote something distinct?
What does rgb32 mean here?
The size of the input file is a little more than (width * height * 8) bytes.
Normally a video file contains a video stream (whose format is specified using -vcodec), embedded in a media container (e.g. mp4, mkv, wav, etc.). The -f option is used to specify the container format. -f rawvideo is basically a dummy setting that tells ffmpeg that your video is not in any container.
-vcodec rawvideo means that the video data within the container is not compressed. However, there are many ways uncompressed video could be stored, so it is necessary to specify the -pix_fmt option. In your case, -pix_fmt rgb32 says that each pixel in your raw video data uses 32 bits (1 byte each for red, green, and blue, plus one byte that is normally reserved for alpha and ignored here).
For more information on what the options mean, see the ffmpeg documentation.
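As a quick way to see the format in practice, you can let ffmpeg itself produce a matching raw file and then feed it back through the command from the question. This is only a sketch: 320x240 is a placeholder size, and since rgb32 is a 4-bytes-per-pixel layout, the raw file should be exactly width * height * 4 bytes per frame.
# write one raw rgb32 frame (no container) to a file
ffmpeg -f lavfi -i testsrc=size=320x240:rate=1 -vframes 1 -f rawvideo -pix_fmt rgb32 infile
# convert that raw frame back to PNG, as in the question
ffmpeg -vframes 1 -vcodec rawvideo -f rawvideo -pix_fmt rgb32 -s 320x240 -i infile -f image2 -vcodec png out.png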
