Using OpenCV 2.4.4 with FFmpeg in Windows

I know there are other questions dealing with FFmpeg usage in OpenCV, but most of them appear to be outdated.
By opening the makefiles in CMake, I can verify that the WITH_FFMPEG flag is on. My output folder for the OpenCV build contains a bin folder, within which are Debug and Release folders, each containing a copy of a .dll file named opencv_ffmpeg244.dll. I can step into the OpenCV source code when I create a VideoWriter and verify that the function pointers into the .dll are filled correctly. That much appears to be working.
If I use the FOURCC code of CV_FOURCC_PROMPT, the following codecs work properly:
Microsoft Video 1
Intel IYUV codec
Logitech Video (I420)
Cinepak Codec by Radius
Full Frames (Uncompressed)
The following codecs do not work properly (i.e. produce a 0 KB video file):
Microsoft RLE
If my understanding is correct, using FFmpeg should allow encoding video with a whole bunch of additional codecs (x264, DIVX, XVID, and so on). However, none of these appear in the prompt. Manually setting them by their FOURCC codes with the CV_FOURCC(...) macro doesn't work either. For instance, CV_FOURCC('X','2','6','4') produces the message:
Could not find encoder for codec id 28: Encoder not found
and produces a 0 KB video file.
Using CV_FOURCC('X','V','I','D') produces no error message and makes a 6 KB video file that will not play in Windows Media Player or VLC.
I tried manually downloading the Xvid codec from Xvid.org. Once it was installed, it appeared under the VFW selection in the prompt, and encoding worked properly. So that's close to a solution, but if I try to set the FOURCC code directly, it still fails as above! I have to pick it from the prompt every time. Isn't FFmpeg supposed to include a whole bunch of codecs? If so, why am I manually downloading a codec instead of using the one built into FFmpeg?
What am I missing here? Is there a way to check that FFmpeg is "enabled"? It seems like the only codecs available in the prompt are VfW codecs, not the FFmpeg ones. The .dll has been built and is sitting in the same folder as the executable, but it appears it's not being used in any way.
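The closest thing I've found to a direct check is printing the build information and looking at the Video I/O section (a minimal sketch, assuming the 2.4.x API):
#include <opencv2/core/core.hpp>
#include <iostream>

int main()
{
    // Prints the CMake configuration OpenCV was built with; look for
    // "FFMPEG: YES" (or NO) under the Video I/O section.
    std::cout << cv::getBuildInformation() << std::endl;
    return 0;
}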
Lots of related questions here. Hoping to find somebody knowledgeable about the FFmpeg implementation in OpenCV and with some knowledge of how all of these pieces fit together.

How about running ffmpeg and your application separately and transferring images as piped data?
To get video into the OpenCV program:
ffmpeg -i input.mp4 -vcodec mjpeg -f image2pipe -pix_fmt yuvj420p -r 10 - | program.exe
and for recording:
program.exe | ffmpeg -r 10 -vcodec mjpeg -f image2pipe -i - -vcodec h264 output.mp4
program.exe should be capable of reading concatenated JPEG images from stdin and writing the same to stdout, and the above workflow will work. Here's some code to read from stdin and display the video:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>
#include <string>
#include <vector>
using namespace cv;
#if defined(_MSC_VER) || defined(WIN32) || defined(_WIN32) || defined(__WIN32__) \
 || defined(WIN64) || defined(_WIN64) || defined(__WIN64__)
# include <io.h>
# include <fcntl.h>
# define SET_BINARY_MODE(handle) setmode(handle, O_BINARY)
#else
# include <unistd.h>
# define SET_BINARY_MODE(handle) ((void)0)
#endif
#define BUFSIZE 10240
int main(int argc, char **argv)
{
    SET_BINARY_MODE(fileno(stdin));     // stdin must be binary on Windows
    std::vector<uchar> data;            // bytes of the JPEG currently being assembled
    bool skip = true;                   // true while scanning for the next SOI marker
    bool imgready = false;              // a complete JPEG (SOI..EOI) has been collected
    bool ff = false;                    // previous byte was 0xff
    int readbytes = -1;
    while (1)
    {
        char ca[BUFSIZE];
        uchar c;
        if (readbytes != 0)
        {
            readbytes = read(fileno(stdin), ca, BUFSIZE);
            for (int i = 0; i < readbytes; i++)
            {
                c = ca[i];
                if (ff && c == (uchar)0xd8)   // 0xffd8 = SOI: start of a JPEG
                {
                    skip = false;
                    data.push_back((uchar)0xff);
                }
                if (ff && c == 0xd9)          // 0xffd9 = EOI: the JPEG is complete
                {
                    imgready = true;
                    data.push_back((uchar)0xd9);
                    skip = true;
                }
                ff = (c == 0xff);
                if (!skip)
                {
                    data.push_back(c);
                }
                if (imgready)
                {
                    if (data.size() != 0)
                    {
                        cv::Mat data_mat(data);
                        cv::Mat frame(imdecode(data_mat, 1));
                        imshow("frame", frame);
                        waitKey(1);
                    }
                    else
                    {
                        printf("warning: empty image\n");
                    }
                    imgready = false;
                    skip = true;
                    data.clear();
                }
            }
        }
        else
        {
            throw std::string("zero byte read"); // EOF on stdin
        }
    }
}
To write to the output, something like this should work:
void saveFramestdout(cv::Mat& frame, int compression)
{
    SET_BINARY_MODE(fileno(stdout));    // stdout must be binary on Windows
    cv::Mat towrite;
    if (frame.type() == CV_8UC1)
    {
        cvtColor(frame, towrite, CV_GRAY2BGR);
    }
    else if (frame.type() == CV_32FC3)
    {
        // Normalize float images into the 0-255 range before encoding.
        double minVal, maxVal;
        minMaxLoc(frame, &minVal, &maxVal);
        frame.convertTo(towrite, CV_8U, 255.0/(maxVal - minVal), -minVal * 255.0/(maxVal - minVal));
    }
    else
    {
        towrite = frame;
    }
    std::vector<uchar> buffer;
    std::vector<int> param(2);
    param[0] = CV_IMWRITE_JPEG_QUALITY;
    param[1] = compression;             // default 95, valid range 0-100
    imencode(".jpg", towrite, buffer, param);
    ::write(fileno(stdout), &buffer[0], buffer.size());
}
The problem with the above is the repeated JPEG encode/decode, which can be partially mitigated by linking against libjpeg-turbo. Alternatively, one could figure out how to pass raw data between ffmpeg and OpenCV directly. For my case this is quite acceptable, as most of the overhead is in the encoding or the video processing.
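For reference, a minimal sketch of that raw-data route (untested; assumes a known 640x480 bgr24 stream, which must match the actual video exactly):
#include <opencv2/highgui/highgui.hpp>
#include <cstdio>

// Pair with: ffmpeg -i input.mp4 -f rawvideo -pix_fmt bgr24 - | program.exe
// W and H are assumptions here and must match the decoded video.
int main()
{
    const int W = 640, H = 480;
    const size_t frameBytes = (size_t)W * H * 3; // bgr24 = 3 bytes per pixel
    cv::Mat frame(H, W, CV_8UC3);
    // On Windows, put stdin into binary mode first (see SET_BINARY_MODE above).
    while (fread(frame.data, 1, frameBytes, stdin) == frameBytes)
    {
        cv::imshow("frame", frame);
        cv::waitKey(1);
    }
    return 0;
}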

Part Way There
I used CV_FOURCC('x','2','6','4'), and although it selected the correct codec, my defaults were wrong and I wasn't sure how to set them properly:
[libx264 @ 0x7f9f0b022600] broken ffmpeg default settings detected
Relevant C++ source:
VideoCapture cap(argv[1]);              // open the input file
if (!cap.isOpened()) {                  // check if we succeeded
    cout << "failed to open file " << argv[1] << endl;
    return -1;
}
Mat raw;
// fpsOut and size are defined elsewhere: the output frame rate and frame size
VideoWriter out(argv[2], CV_FOURCC('x','2','6','4'), fpsOut, size, true);
for (;;) {
    bool bSuccess = cap.read(raw);      // grab a new frame from the input
    if (!bSuccess) {
        cout << "finished reading file" << endl;
        break;
    }
    out << raw;
}

Your setup fails because FFmpeg creates an XVID codec for you but then fails to operate it, hence the invalid files.
Luckily, the XVID codec is registered in VfW, so you can simply remove the opencv_ffmpeg*.dll library to disable FFmpeg. Once FFmpeg is not used, your "xvid" FOURCC will trigger the correct codec through the VfW mechanism.
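For instance, with the DLL removed, a writer opened like this should resolve through VfW (a sketch; file name, fps, and size are illustrative):
// The "xvid" fourcc is resolved via Video for Windows once opencv_ffmpeg*.dll is gone.
cv::VideoWriter out("out.avi", CV_FOURCC('x','v','i','d'), 30.0, cv::Size(640, 480), true);
if (!out.isOpened()) { /* the Xvid VfW codec is not installed */ }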

Related

How to Output Mjpeg from Kokorin Jaffree FFmpeg via UDP to a Localhost Port?

I have a Java program which displays dual webcams and records them to file in FHD 30fps H264/H265. It uses Sarxos Webcam for the initial setup and display, but when recording it switches to Jaffree FFmpeg. During recording, Sarxos must release its webcam access and cannot display while recording continues.
I have tried recording with Xuggler/Sarxos, but Sarxos seems to only access raw video from the webcams, which limits the achievable frame rate and resolution. At 1920x1080 the cameras can only deliver 5 fps of raw video.
I am trying to direct mjpeg streams from Jaffree to local ports for display purposes during recording, but I cannot figure out how to do it.
Simultaneous recording plus sending to a port can be done from the terminal with the following:
ffmpeg -f dshow -video_size 1920x1080 -rtbufsize 944640k -framerate 25 -vcodec mjpeg -i video="Logitech Webcam C930e" -pix_fmt yuv420p -c:v libx264 outFHDx25.mp4 -f mpegts udp://localhost:1234?pkt_size=188&buffer_size=65535
and viewed from the port in a different terminal like this:
ffplay -i udp://localhost:1234
The video which displays is a little blocky compared with the video recorded to file. Any suggestions on how to improve this would be appreciated.
Note that FFplay is not included in Jaffree FFmpeg.
I would like to send the mjpeg to a port and then read it into the Sarxos Webcam viewer to display while recording is in progress.
The Jaffree Java code for recording the output of one webcam to file follows. It takes the mjpeg/yuv422p output from the webcam and normally encodes it to file as H264/yuv420p:
public static FFmpeg createTestFFmpeg() {
    String camera1Ref = "video=" + cam1Vid + ":audio=" + cam1Aud;
    return FFmpeg.atPath()
        .addArguments("-f", "dshow")               // selects dshow for Windows
        .addArguments("-video_size", resString)    // video resolution, e.g. 1920x1080
        .addArguments("-rtbufsize", rtBufResultString)
        .addArguments("-thread_queue_size", threadQ)
        .addArguments("-framerate", fpsString)     // capture frame rate, e.g. 30 fps
        .addArguments(codec, vidString)            // set capture encode mode from camera
        .addArgument(audio)                        // on or off
        .addArguments("-i", camera1Ref)            // name of camera to capture
        .addArguments("-pix_fmt", pixFmt)
        .addArguments("-c:v", enc2)                // e.g. enc2 = "libx264", "h264_nvenc"
        .addArguments(enc3, enc4)                  // enc3 = "-crf", enc4 = "20"
        .addArguments(enc5, enc6)                  // enc5 = "-gpu:v", enc6 = "0"
        .addArguments(enc7, enc8)                  // enc7 = "-cq:v", enc8 = "20"
        .addArguments(enc9, enc10)                 // enc9 = "-rc:v", enc10 = "vbr"
        .addArguments(enc11, enc12)                // enc11 = "-tune:v", enc12 = "ll"
        .addArguments(enc13, enc14)                // enc13 = "-preset:v", enc14 = "p1"
        .addArguments(enc15, enc16)                // enc15 = "-b:v", enc16 = "0"
        .addArguments(enc17, enc18)                // enc17 = "-maxrate:v", enc18 = "5000k"
        .addArguments(enc19, enc20)                // enc19 = "-bufsize:v", enc20 = "5000k"
        .addArguments(enc21, enc22)                // enc21 = "-profile:v", enc22 = "main"
        .addArgument(noFFStats)                    // "-nostats", stops logging progress/statistics
        .addArguments("-loglevel", ffLogLevel)     // error logging
        .addArgument(bannerResultString)           // "-hide_banner"
        .addArguments("-rtbufsize", rtBufResultString)
        .setOverwriteOutput(true)                  // overwrite the file if it exists
        .addOutput(
            UrlOutput
                .toUrl(filePathL))
        .setProgressListener(new ProgressListener() {
            @Override
            public void onProgress(FFmpegProgress progress) {
                if (ffProgress) {
                    System.out.println(progress);
                }
            }
        });
}
How and where do I add the code to output mjpeg via UDP to a local port while simultaneously writing H264 to a file, and what is the syntax? I am sure it must be simple, but I seem to have tried all the permutations without success. I can write to a file OR output to a port, but I cannot do both.
The following code works: FHD H264 30fps is recorded to file, and 5fps mjpeg is output to localhost port 1234 (30 fps works but uses much more CPU).
public static FFmpeg createTestFFmpeg() {
    String camera1Ref = "video=" + cam1Vid + ":audio=" + cam1Aud;
    String fileName = "udp://localhost:1234?pkt_size=1316&buffer_size=65535";
    return FFmpeg.atPath()
        .addArguments("-f", "dshow")
        // input parameters
        .addArguments("-video_size", resString)
        .addArguments("-rtbufsize", rtBufResultString)
        .addArguments("-thread_queue_size", threadQ)
        .addArguments("-framerate", fpsString)
        .addArguments(codec, vidString)
        .addArgument(audio)
        .addArguments("-i", camera1Ref)
        // output parameters
        .addArguments("-pix_fmt", pixFmt)
        .addArguments("-c:v", enc2)
        .addArguments(enc3, enc4)
        .addArguments(enc5, enc6)
        .addArguments(enc7, enc8)
        .addArguments(enc9, enc10)
        .addArguments(enc11, enc12)
        .addArguments(enc13, enc14)
        .addArguments(enc15, enc16)
        .addArguments(enc17, enc18)
        .addArguments(enc19, enc20)
        .addArguments(enc21, enc22)
        .addArgument(noFFStats)
        .addArguments("-loglevel", ffLogLevel)
        .addArgument(bannerResultString)
        .addArguments("-rtbufsize", rtBufResultString) // possibly redundant here
        .setOverwriteOutput(true)
        // output to file
        .addArgument(filePathL) // using .addOutput here stops the following lines from executing
        // output to port
        .addArguments("-r", "5") // frame rate 5 -> cpu ~15%; fps=30 gives cpu ~50%
        //.addArguments("-c:v", "libx264") // cpu ~50% but very good picture quality, else blocky
        //.addArguments("-c", "copy")      // doesn't work
        .addArguments("-f", "mpegts")
        .addArgument(fileName)
        .setProgressListener(new ProgressListener() {
            @Override
            public void onProgress(FFmpegProgress progress) {
                if (ffProgress) {
                    System.out.println(progress);
                }
            }
        });
}
Output to the port can be viewed with FFplay as described previously, or with VLC: Media -> Open Network Stream -> enter the network URL udp://@:1234

FFmpeg libavcodec decode then re-encode video issue

I'm trying to use the libavcodec library in FFmpeg to decode and then re-encode an H.264 video.
I have the decoding part working (it renders to an SDL window fine), but when I try to re-encode the frames I get bad data in the re-encoded video's samples.
Here is a cut-down code snippet of my encode logic:
EncodeResponse H264Codec::EncodeFrame(AVFrame* pFrame, StreamCodecContainer* pStreamCodecContainer, AVPacket* pPacket)
{
    int result = 0;
    result = avcodec_send_frame(pStreamCodecContainer->pEncodingCodecContext, pFrame);
    if (result < 0)
    {
        return EncodeResponse::Fail;
    }
    while (result >= 0)
    {
        result = avcodec_receive_packet(pStreamCodecContainer->pEncodingCodecContext, pPacket);
        // If the encoder needs more frames to create a packet, return and wait for
        // this method to be called again when a new frame is available.
        // Otherwise check if we have failed to encode for some reason.
        // Otherwise a packet has successfully been returned, so write it to the file.
        if (result == AVERROR(EAGAIN) || result == AVERROR_EOF)
        {
            // Higher-level logic decodes the next frame from the source
            // video, then calls this method again.
            return EncodeResponse::SendNextFrame;
        }
        else if (result < 0)
        {
            return EncodeResponse::Fail;
        }
        else
        {
            // Prepare packet for muxing.
            if (pStreamCodecContainer->codecType == AVMEDIA_TYPE_VIDEO)
            {
                av_packet_rescale_ts(m_pPacket, pStreamCodecContainer->pEncodingCodecContext->time_base,
                    m_pDecodingFormatContext->streams[pStreamCodecContainer->streamIndex]->time_base);
            }
            m_pPacket->stream_index = pStreamCodecContainer->streamIndex;
            int result = av_interleaved_write_frame(m_pEncodingFormatContext, m_pPacket);
            av_packet_unref(m_pPacket);
        }
    }
    return EncodeResponse::EncoderEndOfFile;
}
Strange behaviour I noticed is that before I get the first packet from avcodec_receive_packet, I have to send 50+ frames to avcodec_send_frame.
I built a debug build of FFmpeg and, stepping into the code, I noticed that AVERROR(EAGAIN) is returned by avcodec_receive_packet because of the following in x264_encoder_encode in x264's encoder.c:
if( h->frames.i_input <= h->frames.i_delay + 1 - h->i_thread_frames )
{
    /* Nothing yet to encode, waiting for filling of buffers */
    pic_out->i_type = X264_TYPE_AUTO;
    return 0;
}
For some reason my codec context (h) never has any frames. I have spent a long time trying to debug FFmpeg and determine what I'm doing wrong, but I have reached the limit of my video codec knowledge (which is little).
I'm testing this with a video that has no audio, to reduce complication.
I have created a cut-down version of my application and provided a self-contained project (with FFmpeg and SDL built as dependencies). Hopefully this can help anyone willing to help me :).
Project Link
https://github.com/maxhap/video-codec
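One note on the 50+ frame observation above: libx264 buffers frames for lookahead and B-frame decisions, so a long delay before the first packet is expected. At end of stream those buffered frames are retrieved by flushing the encoder; a standard sketch, reusing the names from the code above:
// Enter drain mode by sending a NULL frame, then pull out the buffered packets.
avcodec_send_frame(pStreamCodecContainer->pEncodingCodecContext, NULL);
while (avcodec_receive_packet(pStreamCodecContainer->pEncodingCodecContext, pPacket) == 0)
{
    // ...rescale timestamps and mux as in EncodeFrame()...
    av_packet_unref(pPacket);
}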
After looking into encoder initialisation, I found that I have to set the AV_CODEC_FLAG_GLOBAL_HEADER flag on the codec context before calling avcodec_open2:
pStreamCodecContainer->pEncodingCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
This change led to the re-encoded moov box looking much healthier (I used MP4Box.js to parse it). However, the video still does not play correctly: the output has grey frames at the start when played in VLC and won't play at all in other players.
I have since tried creating an encoding context via the sample code, rather than copying my decoding codec parameters. This fixed the bad-data encoding issue. However, my DTS times now scale to huge numbers.
Here is my new codec init:
if (pStreamCodecContainer->codecType == AVMEDIA_TYPE_VIDEO)
{
    pStreamCodecContainer->pEncodingCodecContext->height = pStreamCodecContainer->pDecodingCodecContext->height;
    pStreamCodecContainer->pEncodingCodecContext->width = pStreamCodecContainer->pDecodingCodecContext->width;
    pStreamCodecContainer->pEncodingCodecContext->sample_aspect_ratio = pStreamCodecContainer->pDecodingCodecContext->sample_aspect_ratio;
    /* take first format from list of supported formats */
    if (pStreamCodecContainer->pEncodingCodec->pix_fmts)
    {
        pStreamCodecContainer->pEncodingCodecContext->pix_fmt = pStreamCodecContainer->pEncodingCodec->pix_fmts[0];
    }
    else
    {
        pStreamCodecContainer->pEncodingCodecContext->pix_fmt = pStreamCodecContainer->pDecodingCodecContext->pix_fmt;
    }
    /* video time_base can be set to whatever is handy and supported by encoder */
    pStreamCodecContainer->pEncodingCodecContext->time_base = av_inv_q(pStreamCodecContainer->pDecodingCodecContext->framerate);
}
else
{
    pStreamCodecContainer->pEncodingCodecContext->channel_layout = pStreamCodecContainer->pDecodingCodecContext->channel_layout;
    pStreamCodecContainer->pEncodingCodecContext->channels =
        av_get_channel_layout_nb_channels(pStreamCodecContainer->pEncodingCodecContext->channel_layout);
    /* take first format from list of supported formats */
    pStreamCodecContainer->pEncodingCodecContext->sample_fmt = pStreamCodecContainer->pEncodingCodec->sample_fmts[0];
    pStreamCodecContainer->pEncodingCodecContext->time_base = AVRational{ 1, pStreamCodecContainer->pEncodingCodecContext->sample_rate };
}
Any ideas why my DTS time is rescaling incorrectly?
I managed to fix the DTS scaling by using the time_base value directly from the decoding stream. So
pStreamCodecContainer->pEncodingCodecContext->time_base = m_pDecodingFormatContext->streams[pStreamCodecContainer->streamIndex]->time_base;
instead of
pStreamCodecContainer->pEncodingCodecContext->time_base = av_inv_q(pStreamCodecContainer->pDecodingCodecContext->framerate);
I will create an answer based on all my findings.
To fix the initial problem of a corrupted moov box, I had to add the AV_CODEC_FLAG_GLOBAL_HEADER flag to the encoding codec context before calling avcodec_open2:
encCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
The next issue was badly scaled DTS values in the encoded packets, which had the side effect of making the final mp4's duration hundreds of hours long. To fix this I had to change the encoding codec context's time_base to the time_base of the decoding context's stream. This is different from using av_inv_q(framerate), as suggested in the avcodec transcoding example:
encCodecContext->time_base = decCodecFormatContext->streams[streamIndex]->time_base;
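Putting both findings together, the relevant part of the encoder initialisation looks roughly like this (a sketch with illustrative names, not the full project code):
// Copy geometry from the decoder, pick a pixel format the encoder supports,
// take the time_base from the *input stream*, and request global headers.
encCodecContext->width = decCodecContext->width;
encCodecContext->height = decCodecContext->height;
encCodecContext->pix_fmt = encCodec->pix_fmts ? encCodec->pix_fmts[0]
                                              : decCodecContext->pix_fmt;
encCodecContext->time_base = decCodecFormatContext->streams[streamIndex]->time_base;
encCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; // fixes the corrupt moov box
if (avcodec_open2(encCodecContext, encCodec, NULL) < 0)
{
    // handle failure
}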

raw h.264 bitstream decoding

I can get raw H.264 frames from a camera (the data does NOT contain any network headers, e.g. RTSP or HTTP).
They are raw H.264 data.
And I push these data to a queue frame by frame.
I googled many FFmpeg examples, which use avformat_open_input() with either a local file path or a network path.
And I can see the video if I save the frames to a file and use avformat_open_input() on it.
My problem is that I want to decode the frames in real time, not after they are saved as a file.
Does anyone have any idea on this?
Thanks!
You do not need avformat, you need avcodec. avformat is for parsing containers and protocols; avcodec is for encoding and decoding elementary streams, which is what you already have.
AVPacket avpkt;
av_init_packet(&avpkt);                  // initialize the packet fields
int err, frame_decoded = 0;
AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
AVCodecContext *codecCtx = avcodec_alloc_context3(codec);
avcodec_open2(codecCtx, codec, NULL);
AVFrame *avframe = av_frame_alloc();     // target for the decoded picture
// Set avpkt.data and avpkt.size here (one frame from your queue)
err = avcodec_decode_video2(codecCtx, avframe, &frame_decoded, &avpkt);
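If the camera delivers a continuous byte stream rather than neatly framed access units, libavcodec's parser can split it into packets before decoding. A minimal sketch under that assumption (the chunk buffer is whatever you pull from your queue; codecCtx/avframe/avpkt are set up as above, and the parser is created once with av_parser_init):
// Split a raw Annex-B H.264 byte stream into packets, then decode each one.
static void decode_chunk(AVCodecParserContext *parser, AVCodecContext *codecCtx,
                         AVFrame *avframe, AVPacket *avpkt,
                         uint8_t *buf, int buf_size)
{
    while (buf_size > 0) {
        uint8_t *pkt_data = NULL;
        int pkt_size = 0;
        // The parser accumulates bytes and emits complete packets.
        int used = av_parser_parse2(parser, codecCtx, &pkt_data, &pkt_size,
                                    buf, buf_size, AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        buf += used;
        buf_size -= used;
        if (pkt_size > 0) {
            int frame_decoded = 0;
            avpkt->data = pkt_data;
            avpkt->size = pkt_size;
            avcodec_decode_video2(codecCtx, avframe, &frame_decoded, avpkt);
            // if (frame_decoded) { /* avframe now holds a picture */ }
        }
    }
}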

avcodec_open only works with uncompressed formats

Context: I have a file called libffmpeg.so that I took from the APK of an Android application which uses FFmpeg to encode and decode files between several codecs. Thus, I take for granted that it was compiled with encoding options enabled and that this .so file contains all the codecs somewhere. The file is compiled for ARM (what we call the ARMEABI profile on Android).
I also have a very complete class with interops to call the FFmpeg API. Whatever the origin of this shared library, all call responses are good and most endpoints exist. If not, I add them or fix deprecated ones.
When I want to create an FFmpeg encoder, the returned encoder is correct.
var thisIsSuccessful = avcodec_find_encoder(myAVCodec.id);
Now, I have a problem with codecs. The problem is that - let's say, out of curiosity - I iterate through the list of all the codecs to see which ones I'm able to open with the avcodec_open call ...
AVCodec* res = null; // av_codec_next(null) yields the first codec
while ((res = FFmpeg.av_codec_next(res)) != null)
{
    var name = res->longname;
    AVCodec* encoder = FFmpeg.avcodec_find_encoder(res->id);
    if (encoder != null) {
        AVCodecContext c = new AVCodecContext();
        /* put sample parameters */
        c.bit_rate = 64000;
        c.sample_rate = 22050;
        c.channels = 1;
        if (FFmpeg.avcodec_open(ref c, encoder) >= 0) {
            System.Diagnostics.Debug.WriteLine("[YES] - " + name);
        }
    } else {
        System.Diagnostics.Debug.WriteLine("[NO ] - " + name);
    }
}
... then only uncompressed codecs work (YUV, FFmpeg Video 1, etc.).
My hypotheses are these:
An option was missing at the time the .so file was compiled
The avcodec_open call acts depending on the properties of the AVCodecContext I've referenced in the call
I'm really curious: why is only a minimal set of uncompressed codecs returned?
[EDIT]
@ronald-s-bultje's answer led me to read the AVCodecContext API description, and there are a lot of mandatory fields marked "MUST be set by user" when used with an encoder. Setting values for these parameters on the AVCodecContext made most of the nice codecs available:
c.time_base = new AVRational (); // Output framerate. Here, 30fps
c.time_base.num = 1;
c.time_base.den = 30;
c.me_method = 1; // Motion-estimation mode on compression -> 1 is none
c.width = 640; // Source width
c.height = 480; // Source height
c.gop_size = 30; // Used by h264. Just here for test purposes.
c.bit_rate = c.width * c.height * 4; // Randomly set to that...
c.pix_fmt = FFmpegSharp.Interop.Util.PixelFormat.PIX_FMT_YUV420P; // Source pixel format
"The av_open_codec calls is acting depending on the properties of the AVCodecContext I've referenced in the call."
It's basically that. I mean, for the video encoders, you didn't even set width/height, so most encoders really can't be expected to do anything useful like this, and are right to error out.
You can set default parameters using e.g. avcodec_get_context_defaults3(), which should go a long way toward getting some useful settings into the AVCodecContext. After that, set the typical ones like width/height/pix_fmt to values describing your input format (if you want to do audio encoding - which is actually surprisingly unclear from your question - you'll need to set some different ones like sample_fmt/sample_rate/channels, but it's the same idea). And then you should be relatively good to go.
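A minimal sketch of that flow, using the same era of the API as the question (codec choice and values are illustrative):
// Take encoder defaults, then fill the fields an encoder requires.
// MPEG-4 is just an example of a compressed codec; any AV_CODEC_ID_* works.
AVCodec *enc = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
AVCodecContext *ctx = avcodec_alloc_context3(enc);   // allocated with codec defaults
avcodec_get_context_defaults3(ctx, enc);             // or reset an existing context
ctx->width = 640;                                    // "MUST be set by user"
ctx->height = 480;
ctx->pix_fmt = enc->pix_fmts ? enc->pix_fmts[0]      // first supported input format
                             : AV_PIX_FMT_YUV420P;
ctx->time_base.num = 1;                              // 30 fps
ctx->time_base.den = 30;
ctx->bit_rate = 400000;
if (avcodec_open2(ctx, enc, NULL) >= 0) {
    /* compressed encoder opened successfully */
}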

How to use OpenCV VideoWriter on Mac

I have the following code, which writes a few frames to an .avi video file. This works perfectly fine on a Windows machine, but when I try it on my Mac it creates the .avi file and displays no errors, yet the file will not play. I haven't been able to find a clear solution so far.
I am currently using Mac OSX 10.9.2.
void videoWriter()
{
    CvVideoWriter *writer;
    writer = cvCreateVideoWriter("test.avi", CV_FOURCC('I','Y','U','V'), 1, Size(640,480), 1);
    for (int i = 0; i < 9; i++)
    {
        if (imMan.returnSelect(i)) {
            cout << "Frame " << i << endl;
            /****** Original Image *********/
            Mat frame = imMan.returnOrg(i);
            IplImage fr = frame;
            cvWriteFrame(writer, &fr);
        }
    }
    cvReleaseVideoWriter(&writer);
}
What is the size of frame?
In my experience, cvWriteFrame will not generate an error even if the 4th parameter of cvCreateVideoWriter does not match the dimensions of your image frame; it still writes something like a header (around 414 bytes).
Make sure they match exactly.
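One simple way to guarantee the match is to resize each frame to the writer's size before writing (a sketch, assuming the 640x480 writer from the question):
// Resize frames so the dimensions always match what was passed to cvCreateVideoWriter.
Mat sized;
if (frame.size() != Size(640, 480))
    resize(frame, sized, Size(640, 480));
else
    sized = frame;
IplImage fr = sized;
cvWriteFrame(writer, &fr);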
You have to use the right combination of codec and extension.
The codec is platform-dependent; that could be the problem.
Try using this combination:
writer = cvCreateVideoWriter("test.mkv", CV_FOURCC('X','2','6','4'), 1, Size(640,480), 1);
Here is the reference link
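For what it's worth, a combination often reported to work on OS X with the C++ API is 'm','p','4','v' into a .mov container (a hedged sketch, not verified here; fps and size are illustrative):
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

int main()
{
    // 'm','p','4','v' into a .mov container is a commonly working pair on OS X.
    VideoWriter writer("test.mov", CV_FOURCC('m','p','4','v'), 30.0, Size(640, 480), true);
    if (!writer.isOpened())
        return -1; // the codec/extension pair was rejected by the backend
    Mat frame(480, 640, CV_8UC3, Scalar(0, 255, 0)); // solid green test frame
    for (int i = 0; i < 30; ++i)
        writer << frame;
    return 0; // the writer is released on destruction
}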
