Execution Error using OpenCV for video Capturing - visual-studio-2010

I am facing the following problem when I execute the program: it crashes at the first line of main. The path of the file is correct, nothing is wrong there, and all the lib/include additional-library settings are done correctly. What could be wrong here?
#include <cv.h>
#include <highgui.h>

int main()
{
    // load the video file into memory
    CvCapture *capture = cvCaptureFromAVI("A.avi"); // this instruction crashes; the file is there in the folder, that's not the issue...
    if( !capture ) return 1;

    // obtain the frames per second of that video
    int fps = ( int )cvGetCaptureProperty( capture, CV_CAP_PROP_FPS );

    // create a window with the title "Video"
    cvNamedWindow("Video");

    while(true) {
        // grab and retrieve each frame of the video sequentially
        IplImage* frame = cvQueryFrame( capture );
        if( !frame ) break;

        // show the retrieved frame in the "Video" window
        cvShowImage( "Video", frame );

        int c;
        if(fps != 0){
            // wait for 1000/fps milliseconds
            c = cvWaitKey(1000/fps);
        }else{
            // wait for 40 milliseconds
            c = cvWaitKey(40);
        }
        // exit the loop if the user presses the "Esc" key (ASCII value 27)
        if((char)c == 27 ) break;
    }

    // destroy the opened window
    cvDestroyWindow("Video");
    // release memory
    cvReleaseCapture( &capture );
    return 0;
}

As a sanity check, let's see if the VideoCapture stuff from the OpenCV C++ API works. Let's give this a try:
#include <opencv2/opencv.hpp>
#include <cv.h>

using namespace cv;

VideoCapture capture("A.avi");
Mat matFrame;
capture >> matFrame; // each time you do this, the next video frame goes into the 'matFrame' object
IplImage* frame = cvCloneImage(&(IplImage)matFrame); // convert Mat to IplImage to hook in with your code
I'm mixing the C and C++ OpenCV APIs together here, but I just wanted to help you get it running.
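Spelled out as a complete program, that sanity check might look like this (a sketch assuming OpenCV 2.x and the same "A.avi" as in the question):

#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    VideoCapture capture("A.avi");          // open the file
    if (!capture.isOpened()) return 1;      // open failed: bad path or missing codec

    Mat matFrame;
    namedWindow("Video");
    while (capture.read(matFrame)) {        // read() returns false at end of video
        imshow("Video", matFrame);
        if ((char)waitKey(40) == 27) break; // Esc exits
    }
    return 0;
}

If this version also fails to open the file, the problem is likely the codec or the working directory rather than your original code.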

Related

FFMPEG- H.264 encoding BGR image data to YUV420P video file resulting in empty video

I'm new to FFmpeg and trying to use it to do some screen capture to a video file, but after a lot of online searching I am stumped as to what I'm doing wrong. I've already done the work of capturing screen data via DirectX, which stores it in a BGR pixel format, and I'm just trying to put each frame into a video file. There are two functions: setup, which does all the FFmpeg initialization work, and addImage, which is called in the main program loop and puts each buffer of BGR image data into the video file.
The technique I'm using is to make two frames, one with the BGR data and one with YUV420P (it doesn't need to be the latter, but after a lot of trial and error it was all I was able to get working with H.264), use sws_scale to copy data between the two, and then send that frame to video.mp4.
The file seems to be having data written to it successfully (the file size grows and grows as the program runs), but when I try to view it in VLC I see nothing. Indeed, VLC fails to fetch a length for the video, and both the codec and media information panels are empty. I turned on FFmpeg verbose logging, but all that is spit out is the following:
Setting default whitelist 'Epu��'
Timestamps are unset in a packet for stream -1259342440. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
Encoder did not produce proper pts, making some up.
From what I am reading, I understand these to be warnings rather than errors that would totally corrupt my video file. I separately went through all the error codes being returned and everything seems nominal to me (zero for success for most calls, -11 sometimes for avcodec_receive_packet, but the docs indicate that's expected sometimes).
Based on my understanding of things as they are, this should be working but isn't, and the logs and error codes give me nothing to go on, so I reckon someone with experience with this would save me a ton of time. The code is as follows:
VideoService.h
#ifndef VIDEO_SERVICE_H
#define VIDEO_SERVICE_H

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
}

class VideoService {
public:
    void setup();
    void addImage(unsigned char* data, int lineSize, int width, int height, int align);

private:
    AVCodecContext* context;
    AVFormatContext* formatContext;
    AVFrame* bgrFrame;
    AVFrame* yuvFrame;
    AVStream* videoStream;
    SwsContext* swsContext;
};

#endif
VideoService.cpp
#include "VideoService.h"
#include <stdio.h>
void FfmpegLogCallback(void *ptr, int level, const char *fmt, va_list vargs)
{
FILE* f = fopen("ffmpeg.txt", "a");
fprintf(f, fmt, vargs);
fclose(f);
}
void VideoService::setup() {
int result = 0;
av_log_set_level(AV_LOG_VERBOSE);
av_log_set_callback(FfmpegLogCallback);
bgrFrame = av_frame_alloc();
bgrFrame->width = 1920;
bgrFrame->height = 1080;
bgrFrame->format = AV_PIX_FMT_BGRA;
bgrFrame->time_base.num = 1;
bgrFrame->time_base.den = 60;
result = av_frame_get_buffer(bgrFrame, 1);
yuvFrame = av_frame_alloc();
yuvFrame->width = 1920;
yuvFrame->height = 1080;
yuvFrame->format = AV_PIX_FMT_YUV420P;
yuvFrame->time_base.num = 1;
yuvFrame->time_base.den = 60;
result = av_frame_get_buffer(yuvFrame, 1);
const AVOutputFormat* outputFormat = av_guess_format("mp4", "video.mp4", "video/mp4");
result = avformat_alloc_output_context2(
&formatContext,
outputFormat,
"mp4",
"video.mp4"
);
formatContext->oformat = outputFormat;
const AVCodec* codec = avcodec_find_encoder(AVCodecID::AV_CODEC_ID_H264);
result = avio_open2(&formatContext->pb, "video.mp4", AVIO_FLAG_WRITE, NULL, NULL);
videoStream = avformat_new_stream(formatContext, codec);
AVCodecParameters* codecParameters = videoStream->codecpar;
codecParameters->codec_type = AVMediaType::AVMEDIA_TYPE_VIDEO;
codecParameters->codec_id = AVCodecID::AV_CODEC_ID_HEVC;
codecParameters->width = 1920;
codecParameters->height = 1080;
codecParameters->format = AVPixelFormat::AV_PIX_FMT_YUV420P;
videoStream->time_base.num = 1;
videoStream->time_base.den = 60;
result = avformat_write_header(formatContext, NULL);
codec = avcodec_find_encoder(videoStream->codecpar->codec_id);
context = avcodec_alloc_context3(codec);
context->time_base.num = 1;
context->time_base.den = 60;
avcodec_parameters_to_context(context, videoStream->codecpar);
result = avcodec_open2(context, codec, nullptr);
swsContext = sws_getContext(1920, 1080, AV_PIX_FMT_BGRA, 1920, 1080, AV_PIX_FMT_YUV420P, 0, 0, 0, 0);
}
void VideoService::addImage(unsigned char* data, int lineSize, int width, int height, int align) {
int result = 0;
result = av_image_fill_arrays(bgrFrame->data, bgrFrame->linesize, data, AV_PIX_FMT_BGRA, 1920, 1080, 1);
sws_scale(swsContext, bgrFrame->data, bgrFrame->linesize, 0, 1080, &yuvFrame->data[0], yuvFrame->linesize);
result = avcodec_send_frame(context, yuvFrame);
AVPacket *packet = av_packet_alloc();
result = avcodec_receive_packet(context, packet);
if (result != 0) {
return;
}
result = av_interleaved_write_frame(formatContext, packet);
}
My environment is Windows 10, I'm building with clang++ 12.0.1, and using the FFmpeg 5.1 libs.
See the official sample, muxing.c.
Fix your code like the following (a sketch is given after the list).
1. Set the fields of an AVCodecContext and call avcodec_parameters_from_context(), instead of calling avcodec_parameters_to_context(). You should set width, height, bit_rate, pix_fmt, framerate and time_base at least. (See the implementation of add_stream() in the sample.)
2. Specify an algorithm such as SWS_BILINEAR when calling sws_getContext(). (A default algorithm will be selected if you pass 0, but that's an undocumented feature.)
3. Set the pts (presentation timestamp) field of each AVFrame.
4. Implement a loop calling avcodec_receive_packet() after calling avcodec_send_frame(); see write_frame() in the sample. (A single frame can result in multiple packets.)
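Put together, the points above might look like the following sketch, loosely following muxing.c. The 4 Mbps bit rate and the helper names are assumptions for illustration, and error handling is elided:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
}

// 1. Configure the encoder context first, then copy its parameters OUT to the stream.
AVCodecContext* setupEncoder(AVFormatContext* formatContext, AVStream* stream, const AVCodec* codec) {
    AVCodecContext* ctx = avcodec_alloc_context3(codec);
    ctx->width     = 1920;
    ctx->height    = 1080;
    ctx->pix_fmt   = AV_PIX_FMT_YUV420P;
    ctx->bit_rate  = 4000000;                      // assumed value; tune to taste
    ctx->time_base = AVRational{1, 60};
    ctx->framerate = AVRational{60, 1};
    if (formatContext->oformat->flags & AVFMT_GLOBALHEADER)
        ctx->flags |= AV_CODEC_FLAG_GLOBAL_HEADER; // mp4 wants global headers
    avcodec_open2(ctx, codec, nullptr);
    avcodec_parameters_from_context(stream->codecpar, ctx); // from_context, not to_context
    return ctx;
}

// 2. Name the scaling algorithm explicitly when creating the SwsContext:
// sws_getContext(1920, 1080, AV_PIX_FMT_BGRA, 1920, 1080, AV_PIX_FMT_YUV420P,
//                SWS_BILINEAR, nullptr, nullptr, nullptr);

// 3. + 4. Set pts on every frame, and drain ALL packets one frame may produce.
void encodeFrame(AVFormatContext* formatContext, AVStream* stream,
                 AVCodecContext* ctx, AVFrame* frame, int64_t frameIndex) {
    if (frame) frame->pts = frameIndex;  // pts must increase monotonically
    avcodec_send_frame(ctx, frame);      // pass nullptr once at the end to flush
    AVPacket* packet = av_packet_alloc();
    while (avcodec_receive_packet(ctx, packet) == 0) {
        av_packet_rescale_ts(packet, ctx->time_base, stream->time_base);
        packet->stream_index = stream->index;
        av_interleaved_write_frame(formatContext, packet);
        av_packet_unref(packet);
    }
    av_packet_free(&packet);
}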

How to find the dpi of a monitor on which a specific window is placed in linux?

I want to change the font size when my application window moves from one monitor to another, depending on the DPI of the destination monitor.
I played with xrandr, xdpyinfo and Xlib. I looked at the source code, but I couldn't find a way to determine the monitor on which a given window (window ID) is placed.
Qt has QDesktopWidget, which provides physicalDpiX/Y but only (so it seems) for the primary monitor.
xrandr.h contains XRROutputInfo which delivers mm_width and mm_height, but how can I make the connection to a window id?
Since this question got some attention, I want to share my research. I have not found a perfect solution; it looks like it's not possible.
But playing with the following code snippet will probably help you. The idea is to determine the underlying display by comparing the window position: if the position is larger than the first screen's resolution, it must be on the second monitor. Pretty straightforward.
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>
#include <stdio.h>
#include <stdlib.h>

// compile: g++ screen_dimension.cpp -lX11 -lXrandr

int main()
{
    int wid = atoi( getenv( "WINDOWID" ) );
    printf("window id: %i\n", wid);

    Display * dpy = XOpenDisplay(NULL);
    int screen = DefaultScreen(dpy);
    Window root = DefaultRootWindow(dpy);
    XRRScreenResources * res = XRRGetScreenResourcesCurrent(dpy, root);

    XRROutputInfo * output_info;
    for (int i = 0; i < res->noutput; i++)
    {
        output_info = XRRGetOutputInfo (dpy, res, res->outputs[i]);
        if( output_info->connection ) continue; // no connection, no crtcs

        printf(" (%lu %lu) mm Name: %s connection: %i ncrtc: %i \n", output_info->mm_width
              , output_info->mm_height
              , output_info->name
              , output_info->connection
              , output_info->ncrtc
        );
    }

    printf("crtcs:\n");
    for( int j = 0; j < output_info->ncrtc; j++ ) {
        XRRCrtcInfo * crtc_info = XRRGetCrtcInfo( dpy, res, res->crtcs[ j ] );
        if( not crtc_info->noutput ) continue;

        printf("%i w: %5i h: %5i x: %5i y: %i\n", j
              , crtc_info->width
              , crtc_info->height
              , crtc_info->x
              , crtc_info->y
        );
    }
}
There are actually 2 functions to query resources about the screens:
XRRGetScreenResourcesCurrent and XRRGetScreenResources. The first one returns cached values, while the latter asks the server, which may introduce polling. The description (search for RRGetScreenResources):
https://www.x.org/releases/X11R7.6/doc/randrproto/randrproto.txt
Someone went through the trouble timing it:
https://github.com/glfw/glfw/issues/347
XRRGetScreenResourcesCurrent: Typically from 20 to 100 us.
XRRGetScreenResources: Typically from 13600 to 13700 us.
Ok, since there is no further discussion here and I am convinced my little program (see above) works, I declare it now as: Answered!
Compile instructions are
g++ screen_dimension.cpp -lX11 -lXrandr
(also added as comment above)
Why so complicated?! Just get the info from the screen your window is attached to.
double dDisplayDPI_H, dDisplayDPI_V;
dDisplayDPI_H = ((double)DisplayWidth( dpy, scr )) / (((double)DisplayWidthMM( dpy, scr )) / 25.4);
dDisplayDPI_V = ((double)DisplayHeight( dpy, scr )) / (((double)DisplayHeightMM( dpy, scr )) / 25.4);
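For instance (numbers made up), a screen reported as 1920 pixels and 508 mm wide works out to 1920 / (508 / 25.4) = 96 DPI.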

load then write image

I read an image using OpenCV and save it again, but when I read it back later the data are not the same. That is, after I read the image I save it, then copy the saved image and read the data inside it, but the data are not the same as before. I wrote a small program to do the following:
1- read an image
2- save the image
3- save the image data into a text file
4- read the image saved in step 2
5- compare the values of the image to the values in the text file and print them together
My code is:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>
#include <stdint.h>
#include "highgui.h"

IplImage *PlainImage = 0, *CipherImage = 0, *DecPlainImage = 0;

void func_printimage()
{
    // create a window
    cvNamedWindow("Plain Image", CV_WINDOW_AUTOSIZE);
    cvMoveWindow("Plain Image", 800, 600);
    // show the image
    cvShowImage("Plain Image", PlainImage );
    // wait for a key
    cvNamedWindow("Cipher Image", CV_WINDOW_AUTOSIZE);
    cvMoveWindow("Cipher Image", 800, 600);
    // show the image
    cvShowImage("Cipher Image", CipherImage );
    cvSaveImage("CipherImage.jpg", CipherImage, 0);
    cvWaitKey(0);
}

int main()
{
    // i, j and k are used as counters
    int i, j, step, dep, k, ch, L, C, P, sum = 0;
    uchar *data_byte;

    // Define CPU time parameters for each layer
    PlainImage = cvLoadImage("PlainImage.jpg", 3);
    CipherImage = cvLoadImage("PlainImage.jpg", 3);
    L = PlainImage->height;
    C = PlainImage->width;
    P = PlainImage->nChannels;
    step = PlainImage->widthStep;
    data_byte = CipherImage->imageData;
    printf("Image Information are:\nL=%d\n", L);
    printf("C=%d\n", C);
    printf("P=%d\n", P);
    system("pause");

    FILE *f1;
    f1 = fopen("cipher1.txt", "wt");
    fprintf(f1, "%d\t%d\t%d\t%d\t", L, C, P, CipherImage->depth);
    for(k = 0; k < L*C*P; k++)
    {
        fprintf(f1, "%d\t", data_byte[k]);
    }
    fclose(f1);

    func_printimage();

    for(k = 0; k < L*C*P; k++)
    {
        data_byte[k] = 0;
    }

    f1 = fopen("cipher1.txt", "rt");
    fscanf(f1, "%d", &L);
    fscanf(f1, "%d", &C);
    fscanf(f1, "%d", &P);
    fscanf(f1, "%d", &dep);
    CipherImage = cvLoadImage("CipherImage.jpg", 3);
    data_byte = CipherImage->imageData;
    printf("Image Information are:\nL=%d\n", L);
    printf("C=%d\n", C);
    printf("P=%d\n", P);
    system("pause");

    for(k = 0; k < L*C*P; k++)
    {
        fscanf(f1, "%d", &i);
        sum += abs(i - data_byte[k]);
        printf("i=%d data=%d\n", i, data_byte[k]);
    }
    printf("difference=%d\n", sum);
    fclose(f1);
    system("pause");
    return 0;
}
// End of the main program
JPEG images use lossy compression, so the pixel values change every time you save.
You should use PNG images instead.
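For instance, switching the save/load pair in the code above to PNG (losslessly compressed) should make the round trip byte-exact:

/* PNG is lossless, so the reloaded pixel values match what was saved */
cvSaveImage("CipherImage.png", CipherImage, 0);
/* ... and later ... */
CipherImage = cvLoadImage("CipherImage.png", 3);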
Here in this post I will show you how to load an image from a directory of your choosing, convert it to grayscale, and then store the new (modified) image in the directory C:\Images.
The code is given below:
#include <cv.h>
#include <highgui.h>

using namespace cv;

int main( )
{
    Mat img;
    img = imread("C:\\prado.jpg", 1 );
    if( !img.data )
    {
        printf( "No image data\n" );
        return -1;
    }
    else
        printf("Your program is working well");

    Mat gray_image;
    cvtColor( img, gray_image, CV_BGR2GRAY );
    imwrite( "C://images/Gray_Image.jpg", gray_image);

    imshow( "real image", img);
    imshow( "Gray image", gray_image);
    waitKey(0);
    return 0;
}
EXPLANATION:
Mat img = imread("C:\\prado.jpg", 1 );
This reads the image from the given path and stores it in the Mat object "img"; a Mat object holds the pixel data of the image.
cvtColor( img, gray_image, CV_BGR2GRAY );
This line converts the original image (imread loads it in BGR channel order) into a grayscale image.
imwrite( "C://images/Gray_Image.jpg", gray_image);
This stores the new, modified image held in the Mat object "gray_image" in the directory C://images/; you can choose your own directory.

Not able to capture image Opencv

I am not able to capture an image. The same code works on another laptop, but my webcam will not open on this one. The output is "ERROR: capture is NULL".
#include "cv.h"
#include "highgui.h"
#include <stdio.h>
// A Simple Camera Capture Framework
int main() {
CvCapture* capture = cvCaptureFromCAM( CV_CAP_ANY );
if ( !capture ) {
fprintf( stderr, "ERROR: capture is NULL \n" );
getchar();
return -1;
}
// Create a window in which the captured images will be presented
cvNamedWindow( "mywindow", CV_WINDOW_AUTOSIZE );
// Show the image captured from the camera in the window and repeat
while ( 1 ) {
// Get one frame
IplImage* frame = cvQueryFrame( capture );
if ( !frame ) {
fprintf( stderr, "ERROR: frame is null...\n" );
getchar();
break;
}
cvShowImage( "mywindow", frame );
// Do not release the frame!
//If ESC key pressed, Key=0x10001B under OpenCV 0.9.7(linux version),
//remove higher bits using AND operator
if ( (cvWaitKey(10) & 255) == 27 ) break;
}
// Release the capture device housekeeping
cvReleaseCapture( &capture );
cvDestroyWindow( "mywindow" );
return 0;
}
You may try different numbers in the place of CV_CAP_ANY.
It is also possible that your OpenCV is not installed appropriately; in that case you should reinstall it with libv4l, as suggested here.
There is a slight chance that your camera is not compatible with OpenCV.
Try passing -1 instead of CV_CAP_ANY.
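A quick way to test both suggestions is to probe a few indices until one opens (a sketch; which index your webcam gets depends on the OS and drivers):

/* Try -1 ("any"), then device indices 0..3, until one opens. */
CvCapture* capture = NULL;
int idx;
for (idx = -1; idx < 4 && !capture; idx++)
    capture = cvCaptureFromCAM(idx);
if (capture)
    printf("Opened camera at index %d\n", idx - 1); /* loop incremented past it */
else
    fprintf(stderr, "No camera opened on indices -1..3\n");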

Using mouse to draw lines on video with OpenCV

I have been playing with OpenCV (I am pretty new to it) to display live camera video. What I want to do next is draw lines on it with my mouse. Does anyone know how to do this? So far, what I have is:
#include "stdafx.h"
#include <stdio.h>
#include "cv.h"
#include "highgui.h"
int main( int argc, char **argv )
{
CvCapture *capture = 0;
IplImage *frame = 0;
int key = 0;
/* initialize camera */
capture = cvCaptureFromCAM( 0 );
/* always check */
if ( !capture ) {
fprintf( stderr, "Cannot open initialize webcam!\n" );
return 1;
}
/* create a window for the video */
cvNamedWindow( "Testing", CV_WINDOW_AUTOSIZE );
while( key != 'q' ) {
/* get a frame */
frame = cvQueryFrame( capture );
/* always check */
if( !frame ) break;
/* display current frame */
cvShowImage( "result", frame );
/* exit if user press 'q' */
key = cvWaitKey( 1 );
}
/* free memory */
cvDestroyWindow( "result" );
cvReleaseCapture( &capture );
return 0;
}
If anyone could help me draw lines on the live video, or if anyone knows of any tips, I'd greatly appreciate it! Thanks!
If it helps, here is my code for drawing rectangles on video streams of different sizes:
#include "stdafx.h"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#define SSTR( x ) dynamic_cast< std::ostringstream & >(( std::ostringstream() << std::dec << x ) ).str()
using namespace std;
using namespace cv;
Rect box; //global structures needed for drawing
bool drawing_box = false;
struct mousecallbackstruct{
Mat* src;
Mat* overlay;
string windowname;
};
Mat srcoverlay,smallsrcoverlay; //an overlay must be created for each window you want to draw on
void onMouse(int event, int x, int y, int flags, void* param) //it seems the only way to use this is by keeping different globals for different windows - meaning you have to set up all thise ahead of time, and keep track of it and not mix/match windows/frames!! horrible design here.
{
cout << event;
mousecallbackstruct mousestruct;
mousestruct = *((mousecallbackstruct*)param);
Mat* srcp = mousestruct.src;
Mat* overlayp = mousestruct.overlay; // yeah, yeah, i use 7 lines where I could use 3, so sue me
Mat src = *srcp;
Mat overlay = *overlayp;
if(!src.data){
cout << "your void * cast didn't work :(\n";
return;
}
switch( event ){
case CV_EVENT_MOUSEMOVE:
if( drawing_box ){
box.width = x-box.x;
box.height = y-box.y;
}
break;
case CV_EVENT_LBUTTONDOWN: //start drawing
drawing_box = true;
box = cvRect( x, y, 0, 0 );
break;
case CV_EVENT_LBUTTONDBLCLK: //double click to clear
drawing_box = false;
overlay.setTo(cv::Scalar::all(0)); //clear it
break;
case CV_EVENT_LBUTTONUP: //draw what we created with Lbuttondown
drawing_box = false;
if( box.width < 0 ){
box.x += box.width;
box.width *= -1;
}
if( box.height < 0 ){
box.y += box.height;
box.height *= -1;
}
rectangle( overlay, Point(box.x, box.y), Point(box.x+box.width,box.y+box.height),CV_RGB(100,200,100),4); //draw rectangle. You can change this to line or circle or whatever. Maybe with the Right mouse button.
break;
}
}
void iimshow(mousecallbackstruct* mystructp){ //this is where we add the text/drawing created in the mouse handler to the actual image (since mouse handler events do not coincide with the drawing events)
mousecallbackstruct mystruct = *mystructp; //custom struct made for the mouse callback - very handy for other functions too
Mat overlay, src;
Mat* srcp = mystruct.src;
Mat* overlayp = mystruct.overlay;
src = *srcp; // yeah, yeah, i use 9 lines where I could use 3, so sue me
overlay = *overlayp;
string name = mystruct.windowname;
Mat added,imageROI;
try{
//cout << "tch:" << overlay.rows << "," << src.rows << ";" << overlay.cols << "," << src.cols << ";" << src.channels() << "," << overlay.channels() <<"," << src.type() << "," << overlay.type() << "\n";
if(overlay.data && overlay.rows == src.rows && overlay.cols == src.cols && overlay.channels() == src.channels()){ //basic error checking
add(src,overlay,added);
}else{
//try to resize it
imageROI= overlay(Rect(0,0,src.cols,src.rows));
add(src,imageROI,added);
}
imshow(name,added);// the actual draw moment
}catch(...){ //if resize didn't work then this should catch it and you can see what didn't match up
cout << "Error. Mismatch:" << overlay.rows << "," << src.rows << ";" << overlay.cols << "," << src.cols << ";" << src.channels() << "," << overlay.channels() <<"," << src.type() << "," << overlay.type() << "\n";
imshow(name + "overlay",overlay);
imshow(name+"source",src);
}
}
int _tmain(int argc, _TCHAR* argv[]){
VideoCapture cap(0); // open the default camera
if(!cap.isOpened()) { // check if we succeeded
cout << "NO camera found \n";
return -1;
}
Mat src,smallsrc,overlay;
cap >> src; //grab 1 frame to build our preliminary Mats and overlays
srcoverlay.create(src.rows,src.cols,src.type()); //create overlays
smallsrcoverlay.create(src.rows,src.cols,src.type());
srcoverlay.setTo(cv::Scalar::all(0)); //clear it
smallsrcoverlay.setTo(cv::Scalar::all(0)); //clear it
namedWindow( "smallsrc", CV_WINDOW_AUTOSIZE );
namedWindow( "source", CV_WINDOW_AUTOSIZE ); //these must be created early for the setmousecallback, AND you have to know what Mats will be using them and not switch them around :(
moveWindow("smallsrc",1000,100); //create a small original capture off to the side of screen
////////////// for each window/mat that uses a mouse handler, you must create one of these structures for it and pass it into the mouse handler, and add a global mat for overlays (at top of code)
mousecallbackstruct srcmousestruct,smallsrcmousestruct; //these get passed into the mouse callback function. Hopefully they update their contents automatically for the callback? :(
srcmousestruct.overlay = &srcoverlay; //fill our custom struct
srcmousestruct.src = &src;
srcmousestruct.windowname = "source";
smallsrcmousestruct.overlay = &smallsrcoverlay; //the small window
smallsrcmousestruct.src = &smallsrc;
smallsrcmousestruct.windowname = "smallsrc";
setMouseCallback(smallsrcmousestruct.windowname, onMouse, (void*)&smallsrcmousestruct); //the actual 'set mouse callback' call
setMouseCallback(srcmousestruct.windowname, onMouse, (void*)&srcmousestruct);
for(;;){ //main loop
/// Load an image
cap >> src;
if( !src.data )
{ return -1; }
resize(src,smallsrc,Size(),.5,.5); //smaller scale window of original
overlay = *srcmousestruct.overlay;
src = *srcmousestruct.src;
iimshow(&srcmousestruct); //my imshow replacement. uses structs
iimshow(&smallsrcmousestruct);
if(waitKey(30) == 27) cin.get(); //esc pauses
}
cin.get();
return 0;
}
You will have to be more clear as to what you mean by drawing on the video.
One option is to handle the mouse positions by drawing the lines between them on a black/blank "mask" image, and "applying" this mask to each video frame before it is displayed.
To capture mouse events you need to create a callback, which is tied to a specific named window. The documentation for cvSetMouseCallback is pretty good. The callback receives the current position and button-click information; from there you can capture points on mouse clicks and use them with cvLine to draw on your frame.
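A minimal sketch of that approach, in the same C API as the question (it assumes the window name "result" that your cvShowImage uses; note that your code creates "Testing" but shows into "result", so you may want to make those match). Clicking twice draws a line between the two clicks on every subsequent frame:

CvPoint pt1 = cvPoint(-1, -1), pt2 = cvPoint(-1, -1); /* -1 means "not set yet" */

void onMouse(int event, int x, int y, int flags, void* param)
{
    if (event == CV_EVENT_LBUTTONDOWN) { /* each click shifts the line endpoints */
        pt1 = pt2;
        pt2 = cvPoint(x, y);
    }
}

/* in main, after creating the window:           */
/*     cvSetMouseCallback("result", onMouse, 0); */
/* in the capture loop, before cvShowImage:      */
/*     if (pt1.x >= 0)                           */
/*         cvLine(frame, pt1, pt2, CV_RGB(0,255,0), 2, 8, 0); */

Drawing onto the frame itself works because pt1/pt2 persist across frames; the mask approach from the other answer would draw onto an overlay image instead, so the lines accumulate independently of the video.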
