Preview pixel value for float-image

Say I have a floating-point image, e.g. in 32FC1 format for a thermal image, and I want to display it using (preferably) ROS or OpenCV tools, while also being able to see the value of the pixel (e.g. the temperature) my mouse is hovering over. How would I do that? RViz can display the image, but will not show any pixel values. image_view can also display the image, but shows the pixel value as RGB.
Thank you!
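A plain-OpenCV approach is to keep the float image in a global cv::Mat, register a mouse callback with cv::setMouseCallback, and read the raw value under the cursor with Mat::at<float>(y, x). A minimal example, using a grayscale JPEG converted to float as a stand-in for the thermal image: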

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using std::cout;
using std::endl;

// global Mat holding the float image, so the mouse callback can read it
cv::Mat img_32FC1;

// called on every mouse event; prints the value under the cursor to the
// console (it can be modified to draw the value on the image instead)
void mouseEventCallBack(int event, int x, int y, int flags, void *userdata)
{
    if (event == cv::EVENT_MOUSEMOVE)
    {
        cout << "x = " << x << ", y = " << y
             << " value = " << img_32FC1.at<float>(y, x) << endl;
    }
}

int main()
{
    // original color image, CV_8UC3
    cv::Mat img_8UC3 = cv::imread("image.jpg", cv::IMREAD_UNCHANGED), img_8UC1;
    if (img_8UC3.empty())
        return -1;
    // convert original image to gray, CV_8UC1
    cv::cvtColor(img_8UC3, img_8UC1, cv::COLOR_BGR2GRAY);
    // convert to float, CV_32FC1, scaled to [0,1]
    img_8UC1.convertTo(img_32FC1, CV_32FC1);
    img_32FC1 /= 255.0;
    // create a window and attach the mouse callback
    cv::namedWindow("window", cv::WINDOW_AUTOSIZE);
    cv::setMouseCallback("window", mouseEventCallBack);
    // display the image (the callback reads the float Mat at the same coordinates)
    cv::imshow("window", img_8UC1);
    cv::waitKey(0);
    cv::destroyAllWindows();
    return 0;
}
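To hook this up to a ROS image topic, one option is to fill img_32FC1 from a subscriber via cv_bridge. Below is a hedged sketch that reuses the global img_32FC1 and mouseEventCallBack above and replaces main(); the topic name /thermal/image is hypothetical, and raw thermal values are normalized only for display (at<float> in the callback still returns the original values):

#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/image_encodings.h>

void imageCallback(const sensor_msgs::ImageConstPtr &msg)
{
    // copy the incoming image as CV_32FC1 into the global Mat used by the mouse callback
    img_32FC1 = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::TYPE_32FC1)->image;
    // normalize a copy to [0,1] only for display; probe the raw Mat in the callback
    cv::Mat display;
    cv::normalize(img_32FC1, display, 0.0, 1.0, cv::NORM_MINMAX);
    cv::imshow("window", display);
    cv::waitKey(1);
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "pixel_probe");
    ros::NodeHandle nh;
    cv::namedWindow("window", cv::WINDOW_AUTOSIZE);
    cv::setMouseCallback("window", mouseEventCallBack);
    ros::Subscriber sub = nh.subscribe("/thermal/image", 1, imageCallback);
    ros::spin();
    return 0;
}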

Related

Correct way to draw zoomable audio waveform

I am trying to implement a smooth zoomable audio waveform but am puzzled about the correct approach to implement zoom. I searched the internet but there is very little information.
So here is what I have done:
Read audio samples from the file and compute waveform points with samplesPerPixel = 10, 20, 40, 80, ..., 10240. Store the data points for each scale (11 in total here). The max and min are also stored along with the points for each samplesPerPixel.
When zooming, switch to the closest dataset. So if samplesPerPixel at the current width is 70, then use the dataset corresponding to samplesPerPixel = 80. The correct dataset index is easily found using log2(samplesPerPixel).
Use subsampling of the dataset to draw waveform points. So if samplesPerPixel = 41 and we are using the dataset for zoom 80, then we use the scaling factor 80/41 to subsample:
let scaleFactor = 80.0 / 41.0
x = waveformPointX[Int(Double(i) * scaleFactor)]
I have yet to find a better approach and am not too sure the above subsampling approach is correct, but this approach certainly consumes a lot of memory and is slow to load data at the start. How do audio editors implement zooming in waveforms? Is there an efficient approach?
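For reference, the lookup in step 2 can be a couple of lines. A sketch (the helper name datasetIndex is hypothetical), assuming scales samplesPerPixel = 10*2^k for k = 0..10 as above:

#include <cmath>

// index of the nearest stored scale >= the current zoom level
int datasetIndex(double samplesPerPixel)
{
    int k = (int)std::ceil(std::log2(samplesPerPixel / 10.0));
    if (k < 0)  k = 0;   // zoomed in past the finest scale
    if (k > 10) k = 10;  // zoomed out past the coarsest scale
    return k;
}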
EDIT: Here is code for computing the mipmaps.
public class WaveformAudioSample {
    var samplesPerPixel: Int = 0
    var totalSamples: Int = 0
    var samples: [CGFloat] = []
    var sampleMax: CGFloat = 0
}

private func downSample(_ waveformSample: WaveformAudioSample, factor: Int) -> WaveformAudioSample {
    NSLog("Averaging samples")
    let downSampledAudioSamples = WaveformAudioSample()
    downSampledAudioSamples.samples = [CGFloat](repeating: 0, count: waveformSample.samples.count / factor)
    downSampledAudioSamples.samplesPerPixel = waveformSample.samplesPerPixel * factor
    downSampledAudioSamples.totalSamples = waveformSample.totalSamples
    for i in 0..<waveformSample.samples.count / factor {
        var total: CGFloat = 0
        for j in 0..<factor {
            total = total + waveformSample.samples[i * factor + j]
        }
        let averagedSample = total / CGFloat(factor)
        downSampledAudioSamples.samples[i] = averagedSample
    }
    NSLog("Averaged samples")
    // return the result so the caller can store it
    return downSampledAudioSamples
}
You should use a power-of-2 size for your data.
This will allow you to use just cheap bit shifts and simple resizing, without any costly floating-point operations or integer multiplication and division.
You should build half-resolution mipmaps from the previous mipmap.
This always creates one sample from 2 samples of the previous mipmap, so there are no nested for loops or costly index computations.
Do not mix floating-point and integer computations if you can avoid it.
Even if you have an FPU, the conversion between int and float is usually very slow. Ideally keep your audio data in integer format...
Here is a small C++/VCL example of these ideas:
//$$---- Form CPP ----
//---------------------------------------------------------------------------
#include <vcl.h>
#include <math.h>
#pragma hdrstop
#include "win_main.h"
//---------------------------------------------------------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TForm1 *Form1;
//---------------------------------------------------------------------------
//---------------------------------------------------------------------------
int xs,ys;              // screen resolution
Graphics::TBitmap *bmp; // back buffer bitmap for rendering
//---------------------------------------------------------------------------
// input data
const int samples=1024;
int sample[samples];
// mipmaps: max 32 resolutions -> up to 2^32 input samples
int *mmdat0[32]={NULL}, // min
    *mmdat1[32]={NULL}, // max
     mmsiz [32]={0};    // resolution
//---------------------------------------------------------------------------
void generate_input(int *data,int size)
{
    int i; float a,da;
    da=10.0*M_PI/float(size-1);
    for (a=0.0,i=0;i<size;i++,a+=da)
    {
        data[i]=float(100.0*sin(a))+Random(40)-20;
    }
}
//---------------------------------------------------------------------------
void mipmap_free()
{
    // free allocated mipmaps if needed
    if (mmdat0[0]) delete[] mmdat0[0];
    mmdat0[0]=NULL;
    mmdat1[0]=NULL;
    mmsiz[0]=0;
}
//---------------------------------------------------------------------------
void mipmap_compute(int *data,int size)
{
    int i,j,k,n,N,a,a0,a1;
    mipmap_free();
    for (N=0,n=size;n;N+=n,n>>=1); // compute size of all mipmaps together
    mmdat0[0]=new int[N+N];        // allocate space for all mipmaps as a single 1D array
    mmdat1[0]=mmdat0[0]+N;         // max will be in the other half
    mmsiz [0]=size;
    for (i=1,n=size;n;n>>=1,i++)   // and just set pointers of the sub-mipmaps
    {
        mmdat0[i]=mmdat0[i-1]+n;   // to point at the right place
        mmdat1[i]=mmdat1[i-1]+n;   // to point at the right place
        mmsiz [i]=mmsiz [i-1]>>1;  // and set resolution as half
    }
    // copy first mipmap
    n=size;
    for (i=0;i<mmsiz[0];i++)
    {
        a=data[i];
        mmdat0[0][i]=a;
        mmdat1[0][i]=a;
    }
    // process all resolutions
    for (k=1;mmsiz[k];k++)
    {
        // halve resolution
        for (i=0,j=0;i<mmsiz[k];i++)
        {
            a=mmdat0[k-1][j];      a0=a;
            a=mmdat1[k-1][j]; j++; a1=a;
            a=mmdat0[k-1][j];      if (a0>a) a0=a;
            a=mmdat1[k-1][j]; j++; if (a1<a) a1=a;
            mmdat0[k][i]=a0;
            mmdat1[k][i]=a1;
        }
    }
}
//---------------------------------------------------------------------------
void draw() // just the render of my App
{
    bmp->Canvas->Brush->Color=clWhite;
    bmp->Canvas->FillRect(TRect(0,0,xs,ys));
    int ix,x,y,y0=ys>>1;
    // plot input data
    bmp->Canvas->Pen->Color=clBlack;
    x=0; y=y0-sample[x];
    bmp->Canvas->MoveTo(x,y);
    for (x=1;x<xs;x++)
    {
        y=y0-sample[x];
        bmp->Canvas->LineTo(x,y);
    }
    // plot mipmap[ix] data
    ix=1;
    bmp->Canvas->Pen->Color=clBlue;
    x=0; y=y0-sample[x];
    bmp->Canvas->MoveTo(x,y);
    for (x=0;x<mmsiz[ix];x++)
    {
        y=y0-mmdat0[ix][x];
        bmp->Canvas->LineTo(x,y);
        y=y0-mmdat1[ix][x];
        bmp->Canvas->LineTo(x,y);
    }
    Form1->Canvas->Draw(0,0,bmp);
    // bmp->SaveToFile("out.bmp");
}
//---------------------------------------------------------------------------
__fastcall TForm1::TForm1(TComponent* Owner):TForm(Owner) // init of my app
{
    // init backbuffer
    bmp=new Graphics::TBitmap;
    bmp->HandleType=bmDIB;
    bmp->PixelFormat=pf32bit;
    generate_input(sample,samples);
    mipmap_compute(sample,samples);
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormDestroy(TObject *Sender) // not important, just the destructor of my App
{
    mipmap_free();
    delete bmp;
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormResize(TObject *Sender) // not important, just the resize event
{
    xs=ClientWidth;
    ys=ClientHeight;
    bmp->Width=xs;
    bmp->Height=ys;
    draw();
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormPaint(TObject *Sender) // not important, just the repaint event
{
    draw();
}
//---------------------------------------------------------------------------
Ignore the window, VCL and rendering related stuff (I just wanted to pass the whole source so you can see how it is used). Important is only the function mipmap_compute, which converts your input data into 2 mipmaps: one holding the min values and the other the max values.
The dynamic allocations are not important; the only important code chunk is marked with the comment:
// process all resolutions
where for each mipmap there is only a single for loop without any expensive operations. If your platform is better with branchless code you can compute the min, max using the built-in branchless functions min, max. Something like:
// process all resolutions
for (k=1;mmsiz[k];k++)
{
    // halve resolution
    for (i=0,j=0;i<mmsiz[k];i++)
    {
        a=mmdat0[k-1][j];      a0=a;
        a=mmdat1[k-1][j]; j++; a1=a;
        a=mmdat0[k-1][j];      a0=min(a0,a);
        a=mmdat1[k-1][j]; j++; a1=max(a1,a);
        mmdat0[k][i]=a0;
        mmdat1[k][i]=a1;
    }
}
This can be further optimized simply by using pointers to the currently selected mipmaps, which gets rid of the [k] and [k-1] indexes and allows one less memory access per element access:
// process all resolutions
for (k=1;mmsiz[k];k++)
{
    // halve resolution
    int *p0=mmdat0[k-1];
    int *p1=mmdat1[k-1];
    int *q0=mmdat0[k];
    int *q1=mmdat1[k];
    for (i=0,j=0;i<mmsiz[k];i++)
    {
        a=p0[j];      a0=a;
        a=p1[j]; j++; a1=a;
        a=p0[j];      a0=min(a0,a);
        a=p1[j]; j++; a1=max(a1,a);
        q0[i]=a0;
        q1[i]=a1;
    }
}
Now all you need is to bilinearly interpolate between 2 mipmaps to achieve your target resolution; here is a small example for this:
// actually rescaled output
int out0[samples]; // min
int out1[samples]; // max
int outs=0;        // size
void resize(int n) // compute out0[n],out1[n] from mipmaps
{
    int i,*p0,*p1,*q0,*q1,pn,qn;
    int pc,qc,pd,qd,pi,qi;
    int a,a0,a1,b0,b1,bm,bd;
    for (i=0;mmsiz[i]>=n;i++); // find smaller resolution
    pn=mmsiz[i];
    p0=mmdat0[i];
    p1=mmdat1[i]; i--;
    qn=mmsiz[i];               // bigger or equal resolution
    q0=mmdat0[i];
    q1=mmdat1[i]; outs=n;
    pc=0; pi=0;
    qc=0; qi=0;
    bm=n-pn; bd=qn-pn;
    for (i=0;i<n-1;i++)
    {
        // bilinear interpolation (3x linear)
        a0=q0[qi];
        a1=q0[qi+1];
        b1=a0+(((a1-a0)*qc)/n);
        a0=p0[pi];
        a1=p0[pi+1];
        b0=a0+(((a1-a0)*pc)/n);
        out0[i]=b0+(((b1-b0)*bm)/bd); // /bd might be a bit shift right by log2(bd)
        // bilinear interpolation (3x linear)
        a0=q1[qi];
        a1=q1[qi+1];
        b1=a0+(((a1-a0)*qc)/n);
        a0=p1[pi];
        a1=p1[pi+1];
        b0=a0+(((a1-a0)*pc)/n);
        out1[i]=b0+(((b1-b0)*bm)/bd); // /bd might be a bit shift right by log2(bd)
        // DDA increment of indexes
        pc+=pn; while (pc>=n){ pi++; pc-=n; } // pi = (i*pn)/n
        qc+=qn; while (qc>=n){ qi++; qc-=n; } // qi = (i*qn)/n
    }
    out0[n-1]=q0[pn-1];
    out1[n-1]=q1[pn-1];
}
Beware the target size n must be less than or equal to the highest mipmap resolution...
This is how it looks (when I change the resolution manually with the mouse wheel):
The choppiness is caused by the GIF grabber... the scaling is fast and seamless in reality.
I had a similar problem, with 1,800,000 points of a waveform to draw on an 800-point screen. The zoom factor was 2000. If someone is interested, that's how I got awesome results:
Divide the very long list into 400 smaller lists
For each smaller list calculate the biggest difference, between the smallest and largest value in that list
Plot 2 points per list, one at (offset + delta / 2) and one at (offset - delta / 2)
Results:
from 453932 points to 800 points
Python code:
numberOfSmallerList = 400
small_list_len = int(len(big_list) / numberOfSmallerList)
finalPointsToPlot = []
for i in range(0, len(big_list), small_list_len):
    biggestDiff = (max(big_list[i:i+small_list_len]) -
                   min(big_list[i:i+small_list_len]))
    finalPointsToPlot.append(biggestDiff/2 + 100)
    finalPointsToPlot.append(100 - biggestDiff/2)

import matplotlib.pyplot as plt
plt.plot(finalPointsToPlot)
plt.show()

How to plot sine wave?

I am new to C++ programming and I would like to plot a sine/cosine/square wave, but I cannot find any resources to help me with it.
My goal is to produce any wave, and then perform a Fourier transform of that wave and produce the resultant wave.
This code should work for you. Just make sure you run it in an IDE that has graphics.h; in many new IDEs graphics.h doesn't come by default and you have to add it first.
#include <iostream>
#include <conio.h>
#include <graphics.h>
#include <math.h>
using namespace std;

int main(){
    initwindow(800,600);
    int x,y;
    line(0,500,getmaxx(),500); // to draw the co-ordinate axes
    line(500,0,500,getmaxy());
    float pi = 3.14159f;
    for(int i = -360; i < 360 ; i++){
        x = (int)500+i;
        y = (int)500 - sin(i*pi/180)*25; // i is in degrees; pi/180 converts to radians
        putpixel(x,y,WHITE); // to plot points on the graph
    }
    getch(); // to see the resultant graph
    closegraph();
    return 0;
}
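If your toolchain does not have graphics.h (it is specific to the old Borland/Turbo C++ environments), a dependency-free alternative is to render the wave into a pixel buffer yourself and write it out as an image. A sketch that plots one period of a sine wave into a binary PGM file (the file name sine.pgm is arbitrary):

#include <cmath>
#include <fstream>
#include <vector>

int main()
{
    const int W = 800, H = 600;
    const double PI = 3.14159265358979323846;
    std::vector<unsigned char> px(W * H, 0);          // black background
    for (int x = 0; x < W; ++x)
    {
        double t = 2.0 * PI * x / W;                  // one full period across the image
        int y = H / 2 - (int)(std::sin(t) * H / 4.0); // centred vertically, amplitude H/4
        px[y * W + x] = 255;                          // white pixel on the curve
    }
    std::ofstream f("sine.pgm", std::ios::binary);
    f << "P5\n" << W << " " << H << "\n255\n";        // binary PGM header
    f.write(reinterpret_cast<const char*>(px.data()), px.size());
    return 0;
}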

Putting an image into a window in X11

I have a QR code in .JPG format. I load it using OpenCV 3.4.4. Now I create a new X11 window using XCreateSimpleWindow(), and then resize the QR image to the size of this new window.
Next, I want to put this resized QR code into the window. I tried using XPutImage(), but without any success, probably because I don't know its usage.
For using XPutImage(), I first took the image of the X11 window using XGetImage(), then obtained the pixel values of the QR image, then assigned them to the pixel values of the image obtained through XGetImage().
Once I had this XImage, I tried putting it to the window using XPutImage(). But it is still showing a black window.
There is no error in the terminal, but the result is not as desired.
Any solution to this problem? Like, how to change the background of the window (X11) w.r.t. a sample image, using XPutImage()?
The code goes like this...
// Written by Ch. Tronche (http://tronche.lri.fr:8000/)
// Copyright by the author. This is unmaintained, no-warranty free software.
// Please use freely. It is appreciated (but by no means mandatory) to
// acknowledge the author's contribution. Thank you.
// Started on Thu Jun 26 23:29:03 1997
//
// Xlib tutorial: 2nd program
// Make a window appear on the screen and draw a line inside.
// If you don't understand this program, go to
// http://tronche.lri.fr:8000/gui/x/xlib-tutorial/2nd-program-anatomy.html
//
// compilation:
// g++ -o go qrinX11.cpp `pkg-config --cflags --libs opencv` -lX11
//
#include <opencv2/opencv.hpp>           // for OpenCV (cv::Mat etc.)
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <bits/stdc++.h>
#include <X11/Xlib.h>                   // every Xlib program must include this
#include <X11/Xutil.h>
#include <X11/keysym.h>
#include <X11/Xatom.h>
#include <X11/extensions/Xcomposite.h>
#include <X11/extensions/Xfixes.h>
#include <X11/extensions/shape.h>
#include <assert.h>                     // to test return values the lazy way
#include <unistd.h>
#define NIL (0)                         // a name for the void pointer
using namespace cv;
using namespace std;
int main()
{
    XGCValues gr_values;
    //GC gc;
    XColor color, dummy;
    Display *dpy = XOpenDisplay(NIL);
    //assert(dpy);
    //int screen = DefaultScreen(dpy);

    // Get some colors
    int blackColor = BlackPixel(dpy, DefaultScreen(dpy));
    int whiteColor = WhitePixel(dpy, DefaultScreen(dpy));

    // Create the window
    Window w = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy), 0, 0,
                                   200, 100, 0, whiteColor, blackColor);

    // We want to get MapNotify events
    XSelectInput(dpy, w, StructureNotifyMask);
    XMapWindow(dpy, w);

    // Wait for the MapNotify event
    for(;;) {
        XEvent e;
        XNextEvent(dpy, &e);
        if (e.type == MapNotify)
            break;
    }

    Window focal = w;
    XWindowAttributes gwa;
    XGetWindowAttributes(dpy, w, &gwa);
    int wd1 = gwa.width;
    int ht1 = gwa.height;

    XImage *image = XGetImage(dpy, w, 0, 0, wd1, ht1, AllPlanes, ZPixmap);
    unsigned long rm = image->red_mask;
    unsigned long gm = image->green_mask;
    unsigned long bm = image->blue_mask;

    Mat img(ht1, wd1, CV_8UC3); // OpenCV Mat object is initialized
    Mat scrap = imread("qr.jpg");//(wid, ht, CV_8UC3);
    resize(scrap, img, img.size(), CV_INTER_AREA);
    for (int x = 0; x < wd1; x++)
        for (int y = 0; y < ht1; y++)
        {
            unsigned long pixel = XGetPixel(image,x,y);
            unsigned char blue  = pixel & bm;         // applying the red/green/blue masks
            unsigned char green = (pixel & gm) >> 8;  // to obtain the individual channel values
            unsigned char red   = (pixel & rm) >> 16;
            Vec3b color = img.at<Vec3b>(Point(x,y));  // store RGB values from the OpenCV image
            //color[0] = blue;
            //color[1] = green;
            //color[2] = red;
            //img.at<Vec3b>(Point(x,y)) = color;
            pixel = color[0];//&color[1]&color[2];
        }
    namedWindow("QR", CV_WINDOW_NORMAL);
    imshow("QR", img);
    cout << "herererere\n";
    GC gc = XCreateGC(dpy, w, 0, NIL);
    XPutImage(dpy, w, gc, image, 0, 0, wd1, ht1, wd1, ht1);
    waitKey(0);
    //sleep(3);
    return 0;
}
Alright, solved it on my own. There was a silly mistake: I was changing the pixel value but never writing it back into the XImage before putting it to the background of the window.
First use XPutPixel(), then use XPutImage().
Here is the final and correct method:
// compilation:
// g++ -o go qrinX11.cpp `pkg-config --cflags --libs opencv` -lX11
//
#include <opencv2/opencv.hpp>           // for OpenCV (cv::Mat etc.)
#include <opencv2/core.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <bits/stdc++.h>
#include <X11/Xlib.h>                   // every Xlib program must include this
#include <X11/Xutil.h>
#include <X11/keysym.h>
#include <X11/Xatom.h>
#include <X11/extensions/Xcomposite.h>
#include <X11/extensions/Xfixes.h>
#include <X11/extensions/shape.h>
#include <assert.h>                     // to test return values the lazy way
#include <unistd.h>
#define NIL (0)                         // a name for the void pointer
using namespace cv;
using namespace std;
int main()
{
    XGCValues gr_values;
    //GC gc;
    XColor color, dummy;
    Display *dpy = XOpenDisplay(NIL);
    //assert(dpy);
    //int screen = DefaultScreen(dpy);

    // Get some colors
    int blackColor = BlackPixel(dpy, DefaultScreen(dpy));
    int whiteColor = WhitePixel(dpy, DefaultScreen(dpy));

    // Create the window
    Window w = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy), 0, 0,
                                   200, 100, 0, whiteColor, blackColor);

    // We want to get MapNotify events
    XSelectInput(dpy, w, StructureNotifyMask);
    XMapWindow(dpy, w);

    // Wait for the MapNotify event
    for(;;) {
        XEvent e;
        XNextEvent(dpy, &e);
        if (e.type == MapNotify)
            break;
    }

    Window focal = w;
    XWindowAttributes gwa;
    XGetWindowAttributes(dpy, w, &gwa);
    int wd1 = gwa.width;
    int ht1 = gwa.height;

    XImage *image = XGetImage(dpy, w, 0, 0, wd1, ht1, AllPlanes, ZPixmap);
    unsigned long rm = image->red_mask;
    unsigned long gm = image->green_mask;
    unsigned long bm = image->blue_mask;

    Mat img(ht1, wd1, CV_8UC3); // OpenCV Mat object is initialized
    Mat scrap = imread("qr.jpg");//(wid, ht, CV_8UC3);
    resize(scrap, img, img.size(), CV_INTER_AREA);
    for (int x = 0; x < wd1; x++)
        for (int y = 0; y < ht1; y++)
        {
            unsigned long pixel = XGetPixel(image,x,y);
            Vec3b color = img.at<Vec3b>(Point(x,y));
            // pack BGR into the 24-bit pixel and write it back into the XImage
            pixel = 65536 * color[2] + 256 * color[1] + color[0];
            XPutPixel(image, x, y, pixel);
        }
    namedWindow("QR", CV_WINDOW_NORMAL);
    imshow("QR", img);
    GC gc = XCreateGC(dpy, w, 0, NIL);
    XPutImage(dpy, w, gc, image, 0, 0, 0, 0, wd1, ht1);
    waitKey(0);
    return 0;
}
Simplicity is key, and improves performance (in this case):
//..
// Mat img(ht1, wd1, CV_8UC3); // OpenCV Mat object is initialized
cv::Mat img(ht1, wd1, CV_8UC4, image->data); // initialize with existing mem
Mat scrap = imread("qr.jpg");//(wid, ht, CV_8UC3);
cv::cvtColor(scrap, scrap, cv::COLOR_BGR2BGRA);
cv::resize(scrap, img, img.size(), 0, 0, cv::INTER_AREA);
// .. and we can skip the for loops
namedWindow("QR", CV_WINDOW_NORMAL);
imshow("QR", img);
// .. etc
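Presumably this works because a 32-bit ZPixmap stores each pixel as BGRA in memory (on a little-endian X server), which matches OpenCV's 4-channel layout; the resize writes straight into the XImage buffer, so the window update stays the same two calls as before:

GC gc = XCreateGC(dpy, w, 0, NIL);
XPutImage(dpy, w, gc, image, 0, 0, 0, 0, wd1, ht1);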

accumulateWeighted throws cv::Exception error

I am new to OpenCV and trying to find contours and draw rectangles around them. Here's my code, but it throws a cv::Exception when it comes to accumulateWeighted().
I tried to convert both src (original image) and dst (background) to CV_32FC3 and then find the average using accumulateWeighted().
#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <ctype.h>

using namespace cv;
using namespace std;

static void help()
{
    cout << "\nThis is an example implementing CAMSHIFT to detect multiple moving objects.\n";
}

Rect rect;
VideoCapture capture;
Mat currentFrame, currentFrame_grey, differenceImg, oldFrame_grey, background;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
bool first = true;

int main(int argc, char* argv[])
{
    //Create a new movie capture object.
    capture.open(0);
    if(!capture.isOpened())
    {
        //error in opening the video input
        cerr << "Unable to open video file: " /*<< videoFilename*/ << endl;
        exit(EXIT_FAILURE);
    }
    //capture current frame from webcam
    capture >> currentFrame;
    //Size of the image.
    CvSize imgSize;
    imgSize.width = currentFrame.size().width;
    imgSize.height = currentFrame.size().height;
    //Images to use in the program.
    currentFrame_grey.create(imgSize, CV_8UC1);
    while(1)
    {
        capture >> currentFrame;
        //Convert the image to grayscale.
        cvtColor(currentFrame, currentFrame_grey, CV_RGB2GRAY);
        currentFrame.convertTo(currentFrame, CV_32FC3);
        background = Mat::zeros(currentFrame.size(), CV_32FC3);
        accumulateWeighted(currentFrame, background, 1.0, NULL); // throws cv::Exception here
        imshow("Background", background);
        if(first) //Capturing background for the first time
        {
            differenceImg = currentFrame_grey.clone();
            oldFrame_grey = currentFrame_grey.clone();
            convertScaleAbs(currentFrame_grey, oldFrame_grey, 1.0, 0.0);
            first = false;
            continue;
        }
        //Subtract the current frame from the moving average.
        absdiff(oldFrame_grey, currentFrame_grey, differenceImg);
        //Blur the difference image.
        blur(differenceImg, differenceImg, imgSize);
        //Apply threshold to discard small unwanted movements.
        threshold(differenceImg, differenceImg, 25, 255, CV_THRESH_BINARY);
        //Find contours.
        findContours(differenceImg, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
        //Draw a bounding box around each contour.
        //for(; contours != 0; contours = contours->h_next)
        for(int i = 0; i < contours.size(); i++)
        {
            rect = boundingRect(contours[i]); //extract bounding box for current contour
            //draw rectangle
            rectangle(currentFrame, cvPoint(rect.x, rect.y), cvPoint(rect.x+rect.width, rect.y+rect.height), cvScalar(0, 0, 255, 0), 2, 8, 0);
        }
        //New background.
        convertScaleAbs(currentFrame_grey, oldFrame_grey, 1.0, 0.0);
        //Display colour image with bounding boxes.
        imshow("Output Image", currentFrame);
        //Display threshold image.
        imshow("Difference image", differenceImg);
        //Clear memory and contours.
        //cvClearMemStorage( storage );
        //contours = 0;
        contours.clear();
        //background = currentFrame;
        //Press Esc to exit.
        char c = cvWaitKey(33);
        if( c == 27 ) break;
    }
    // Destroy all windows.
    destroyAllWindows();
    return 0;
}
Please help me solve this.
you might want to RTFM before asking here.
so, you missed the alpha param as well as the dst Mat in your call to addWeighted:
Mat dst;
addWeighted(currentFrame, 0.5, background, 0.5, 0, dst);
also, no idea what the whole thing should achieve. adding up the current frame before diffing it does not make any sense to me.
if you planned to do background separation, throw it all away, and use one of the built-in background subtractors instead
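For instance, a minimal sketch with the built-in MOG2 subtractor (assuming a webcam at index 0, as in the question), replacing the manual accumulateWeighted/absdiff pipeline:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;
    cv::Ptr<cv::BackgroundSubtractor> bg = cv::createBackgroundSubtractorMOG2();
    cv::Mat frame, fgMask;
    while (cap.read(frame))
    {
        bg->apply(frame, fgMask); // learn the background, get the foreground mask
        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(fgMask.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        for (size_t i = 0; i < contours.size(); i++)
            cv::rectangle(frame, cv::boundingRect(contours[i]), cv::Scalar(0, 0, 255), 2);
        cv::imshow("Output Image", frame);
        if (cv::waitKey(33) == 27) break; // Esc to exit
    }
    return 0;
}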

Creating a simple black image with opencv using cvcreateimage

Very basic question coming from a newbie in OpenCV. I just want to create an image with every pixel set to 0 (black). I have used the following code in the main() function:
IplImage* imgScribble = cvCreateImage(cvSize(320, 240), 8, 3);
And what I get is a solid gray image, instead of a black one.
Thanks in advance!
What version of OpenCV are you using?
For Mat:
#include <opencv2/opencv.hpp>
cv::Mat image(320, 240, CV_8UC3, cv::Scalar(0, 0, 0)); // note: the constructor takes (rows, cols)
I can suggest two more alternatives:
IplImage* imgScribble = cvCreateImage(cvSize(320, 240), 8, 3);
// Here you can set any color
cvSet(imgScribble, cvScalar(0,0,0));
// Here only black
cvZero(imgScribble);
The call to
cvCreateImage(cvSize(320, 240), 8, 3);
creates the image in memory, but I don't think it initializes the data.
You should try this to initialize it:
int i, j, k, step = imgScribble->widthStep;
uchar *data = (uchar *)imgScribble->imageData;
for(i = 0; i < imgScribble->height; i++)
    for(j = 0; j < imgScribble->width; j++)
        for(k = 0; k < 3; k++)
            data[i*step + j*3 + k] = 0;
(Inspired from this (Example C Program))
For Python:
import numpy as np
X_DIMENSION = 288
Y_DIMENSION = 382
black_image = np.zeros((X_DIMENSION, Y_DIMENSION), dtype=np.uint8)
With this code you generate a numpy array, which is what is expected for OpenCV images, and fill it with zeros, which is the color black. This code is made for grayscale images. If you want it to be an RGB black image, just add a 3 at the end of the tuple to create the dimensions: np.zeros((X_DIMENSION, Y_DIMENSION, 3), dtype=np.uint8)
A black-and-white image means a single-channel image. You can simply create it as follows:
Mat img(500, 1000, CV_8UC1, Scalar(a));
where a is in the range 0-255 (0 gives black).
You can see more examples and details on the following page:
https://progtpoint.blogspot.com/2017/01/tutorial-3-create-image.html
Here is my contribution:
cv::Mat output = cv::Mat::zeros(cv::Size(320, 240), CV_8UC3);
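A quick way to verify this on screen (a minimal sketch, assuming a GUI-enabled OpenCV build):

#include <opencv2/opencv.hpp>

int main()
{
    // 320x240, 3 channels, all zeros -> solid black
    cv::Mat output = cv::Mat::zeros(cv::Size(320, 240), CV_8UC3);
    cv::imshow("black", output);
    cv::waitKey(0);
    return 0;
}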
#include "stdafx.h"
#include <opencv/cxcore.h>
#include <opencv/highgui.h>
#include <iostream>
using namespace cv;
using namespace std;

#define LOAD_OPTION CV_LOAD_IMAGE_COLOR

int main( int argc, char** argv )
{
    IplImage *image;
    image = cvLoadImage("picture.jpg", 0); // 0 : black and white; without 0 -> color picture
    cvNamedWindow("Image", CV_WINDOW_AUTOSIZE);
    cvShowImage("Image", image);
    waitKey(-1);
    return 0;
}
