accumulateWeighted throws cv::Exception error - Windows

I am new to OpenCV and am trying to find contours and draw rectangles around them. Here's my code, but it throws cv::Exception when it reaches accumulateWeighted().
I tried to convert both src (the original image) and dst (the background) to CV_32FC3 and then find the running average using accumulateWeighted.
#include "opencv2/video/tracking.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/imgproc/imgproc_c.h"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <ctype.h>
using namespace cv;
using namespace std;
static void help()
{
cout << "\nThis is a Example to implement CAMSHIFT to detect multiple motion objects.\n";
}
Rect rect;
VideoCapture capture;
Mat currentFrame, currentFrame_grey, differenceImg, oldFrame_grey,background;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
bool first = true;
int main(int argc, char* argv[])
{
//Create a new movie capture object.
capture.open(0);
if(!capture.isOpened())
{
//error in opening the video input
cerr << "Unable to open video file: " /*<< videoFilename*/ << endl;
exit(EXIT_FAILURE);
}
//capture current frame from webcam
capture >> currentFrame;
//Size of the image.
CvSize imgSize;
imgSize.width = currentFrame.size().width; //img.size().width
imgSize.height = currentFrame.size().height; ////img.size().height
//Images to use in the program.
currentFrame_grey.create( imgSize, IPL_DEPTH_8U);//image.create().
while(1)
{
capture >> currentFrame;//VideoCapture& VideoCapture::operator>>(Mat& image)
//Convert the image to grayscale.
cvtColor(currentFrame,currentFrame_grey,CV_RGB2GRAY);//cvtColor()
currentFrame.convertTo(currentFrame,CV_32FC3);
background = Mat::zeros(currentFrame.size(), CV_32FC3);
accumulateWeighted(currentFrame,background,1.0,NULL);
imshow("Background",background);
if(first) //Capturing Background for the first time
{
differenceImg = currentFrame_grey.clone();//img1 = img.clone()
oldFrame_grey = currentFrame_grey.clone();//img2 = img.clone()
convertScaleAbs(currentFrame_grey, oldFrame_grey, 1.0, 0.0);//convertscaleabs()
first = false;
continue;
}
//Minus the current frame from the moving average.
absdiff(oldFrame_grey,currentFrame_grey,differenceImg);//absDiff()
//blurring the difference image
blur(differenceImg, differenceImg, imgSize);//blur()
//apply threshold to discard small unwanted movements
threshold(differenceImg, differenceImg, 25, 255, CV_THRESH_BINARY);//threshold()
//find contours
findContours(differenceImg,contours,hierarchy,CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0)); //findcontours()
//draw bounding box around each contour
//for(; contours! = 0; contours = contours->h_next)
for(int i = 0; i < contours.size(); i++)
{
rect = boundingRect(contours[i]); //extract bounding box for current contour
//drawing rectangle
rectangle(currentFrame, cvPoint(rect.x, rect.y), cvPoint(rect.x+rect.width, rect.y+rect.height), cvScalar(0, 0, 255, 0), 2, 8, 0);
}
//New Background
convertScaleAbs(currentFrame_grey, oldFrame_grey, 1.0, 0.0);
//display colour image with bounding box
imshow("Output Image", currentFrame);//imshow()
//display threshold image
imshow("Difference image", differenceImg);//imshow()
//clear memory and contours
//cvClearMemStorage( storage );
//contours = 0;
contours.clear();
//background = currentFrame;
//press Esc to exit
char c = cvWaitKey(33);
if( c == 27 ) break;
}
// Destroy All Windows.
destroyAllWindows();
return 0;
}
Please help me solve this.

You might want to read the documentation before asking here.
The parameters in your accumulateWeighted() call are wrong. The signature is accumulateWeighted(src, dst, alpha, mask), and the mask is optional; the NULL you pass for it is most likely what triggers the cv::Exception, so just leave it out:
accumulateWeighted(currentFrame, background, 0.5);
(If you only wanted a one-off blend of two frames, that would be addWeighted(currentFrame, 0.5, background, 0.5, 0, dst) with a separate Mat dst.)
Also, I have no idea what the whole thing is supposed to achieve; accumulating the current frame before diffing it does not make any sense to me.
If you planned to do background separation, throw it all away and use one of the built-in background subtractors instead.
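For reference, a minimal sketch of the built-in route (assuming OpenCV 3.x, where createBackgroundSubtractorMOG2() and opencv2/opencv.hpp pull in the video module; in 2.4 the BackgroundSubtractorMOG2 class is used directly):
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) return -1;
    // MOG2 keeps its own background model; no manual accumulateWeighted is needed
    Ptr<BackgroundSubtractor> subtractor = createBackgroundSubtractorMOG2();
    Mat frame, fgMask;
    for (;;)
    {
        cap >> frame;
        if (frame.empty()) break;
        subtractor->apply(frame, fgMask);                   // updates the model and returns the foreground mask
        threshold(fgMask, fgMask, 128, 255, THRESH_BINARY); // drop shadow pixels (marked as 127)
        imshow("Foreground mask", fgMask);
        if (waitKey(33) == 27) break;                       // Esc to quit
    }
    return 0;
}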

Related

Separate image object into N sections of equal pixels (Approach)

Sorry in advance, this is more of an algorithmic problem than a coding problem, but I wasn't sure where else to put it. For simplicity's sake, say you have a binary image (white background, solid black object in the foreground).
Example:
sample input
I want to divide this object (meaning only the black pixels) into N sections, all with the same number of pixels (so each section should contain (1/N)*(total # of black pixels)).
With the current algorithm that I'm using, I (1) find the total number of black pixels and (2) divide by N. Then I (3) scan the image row by row marking all black pixels. The result looks something like this:
current output sketch
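(For context, the row-scan baseline described above could look roughly like the sketch below; it assumes a CV_8UC1 binary image called binary.png where object pixels are 0, and an example N of 4.)
#include <opencv2/opencv.hpp>
#include <algorithm>
int main()
{
    const int N = 4;                                           // number of sections (example value)
    cv::Mat img = cv::imread("binary.png", cv::IMREAD_GRAYSCALE);
    cv::Mat labels = cv::Mat::zeros(img.size(), CV_8UC1);      // 0 = background, 1..N = section id
    int total = (int)img.total() - cv::countNonZero(img);      // count of black (object) pixels
    int perSection = std::max(1, total / N), seen = 0;
    for (int y = 0; y < img.rows; ++y)
        for (int x = 0; x < img.cols; ++x)
            if (img.at<uchar>(y, x) == 0)                      // black pixels get a section id in scan order
                labels.at<uchar>(y, x) = (uchar)std::min(N, 1 + seen++ / perSection);
    return 0;
}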
The problem with this is the last (yellow) section, which isn't contiguous. I want to divide the image in a way that makes more sense, like this:
ideal output
Basically, I'd like the boundary between the sections to be as short as possible.
I've been stumped on this for a while, but my old code just isn't cutting it anymore. I only need an approach to identifying the sections; I'll ultimately be outputting each section as an individual image, as well as a grayscale copy of the input image where every pixel's value corresponds to its section number (I don't need help with those parts). Any ideas?
I only need an approach to identifying the sections
Based on this, I tried a couple of approaches; they may serve as guidelines:
Find the contours of the image
Find the moments of the contour and compute the mass center
For the outer corners, you can simply use the convex hull
Find the contour points closest to the mass center (these will be the inner corners)
Then you can separate the object into the desired regions using these key points
Here is the result and code:
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
vector<Point>innerCorners;
bool isClose(Point test);
int main()
{
Mat src_gray;
int thresh = 100;
Mat src = imread("image/dir/star.png");
cvtColor( src, src_gray, COLOR_BGR2GRAY );
namedWindow( "Source",WINDOW_NORMAL );
Mat canny_output;
Canny( src_gray, canny_output, thresh, thresh*2 );
vector<vector<Point> > contours;
findContours( canny_output, contours, RETR_TREE, CHAIN_APPROX_SIMPLE );
vector<Vec4i> hierarchy;
vector<vector<Point> >hull( contours.size() );
vector<Moments> mu(contours.size() );
for( int i = 0; i <(int)contours.size(); i++ )
{ mu[i] = moments( contours[i], false ); }
for( size_t i = 0; i < contours.size(); i++ )
{
if(contours[i].size()>20)
convexHull( contours[i], hull[i] );
}
vector<Point2f> mc( contours.size() );
for( int i = 0; i <(int)contours.size(); i++ )
{ mc[i] = Point2f( mu[i].m10/mu[i].m00 , mu[i].m01/mu[i].m00 ); }
Mat drawing = Mat::zeros( canny_output.size(), CV_8UC3 );
int onlyOne = 1;
for( size_t i = 0; i< contours.size(); i++ )
{
if(contours[i].size()>20 && onlyOne)
{
circle( src, mc[i], 4, Scalar(0,255,255), -1, 8, 0 );
Scalar color = Scalar(255,0,0);
drawContours( drawing, contours, (int)i, color );
drawContours( src, hull, (int)i, color,5 );
Point centerMass = mc[i];
for(int a=0; a<(int)contours[i].size();a++)
{
if(cv::norm(cv::Mat(contours[i][a]),Mat(centerMass))<200 && isClose(contours[i][a]))
{
circle(src,contours[i][a],5,Scalar(0,0,255),10);
innerCorners.push_back(contours[i][a]);
line(src,contours[i][a],centerMass,Scalar(0,255,255),5);
}
}
onlyOne = 0;
}
}
namedWindow( "Hull demo",WINDOW_NORMAL );
imshow( "Hull demo", drawing );
imshow("Source", src );
waitKey();
return 0;
}
bool isClose(Point test){
if(innerCorners.size()==0)
return 1;
for(Point a:innerCorners)
if((cv::norm(cv::Mat(a),cv::Mat(test)))<70)
return 0;
return 1;
}
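A possible follow-up, just as a sketch (not part of the code above): once the separating lines from the mass center to the inner corners have been drawn in black onto a copy of the binary object mask, each region can be pulled out with cv::floodFill seeded from any still-white pixel. Here mask is assumed to be that CV_8UC1 copy with object pixels at 255.
#include <opencv2/opencv.hpp>
#include <vector>
std::vector<cv::Mat> extractSections(cv::Mat mask)             // mask is modified in place
{
    std::vector<cv::Mat> sections;
    int label = 1;
    for (int y = 0; y < mask.rows; ++y)
        for (int x = 0; x < mask.cols; ++x)
            if (mask.at<uchar>(y, x) == 255)                   // found an unlabeled region
            {
                cv::floodFill(mask, cv::Point(x, y), cv::Scalar(++label)); // relabel this region
                cv::Mat section = cv::Mat::zeros(mask.size(), CV_8UC1);
                section.setTo(255, mask == label);             // copy just this region out
                sections.push_back(section);
            }
    return sections;
}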

How to deblur image using fourier transform in open-cv or emgu-cv?

I saw this video about deblurring images using the Fourier transform in MATLAB:
video
I want to convert the code to Emgu CV.
My code in Emgu CV:
string path = Environment.GetFolderPath(Environment.SpecialFolder.Desktop);
Image<Bgr, byte> img = new Image<Bgr, byte>(@"lal.png");
//blur the image
Image<Gray, byte> gray = img.Convert<Gray, byte>().SmoothBlur(31,31);
//convert image to float and get the fourier transform
Mat g_fl = gray.Convert<Gray, float>().Mat;
Matrix<float> dft_image = new Matrix<float>(g_fl.Size);
CvInvoke.Dft(g_fl, dft_image, Emgu.CV.CvEnum.DxtType.Forward, 0);
//here i make an image of kernel with size of the original
Image<Gray, float> ker = new Image<Gray, float>(img.Size);
ker.SetZero();
for (int x = 0; x < 31; x++)
{
for (int y = 0; y < 31; y++)
{
//31 * 31= 961
ker[y, x] = new Gray(1/961);
}
}
//get the fourier of the kernel
Matrix<float> dft_blur = new Matrix<float>(g_fl.Size);
CvInvoke.Dft(ker, dft_blur, Emgu.CV.CvEnum.DxtType.Forward, 0);
// fouier image / fourier blur
Matrix<float> res = new Matrix<float>(g_fl.Size);
for (int x=0;x<g_fl.Cols;x++)
{
for (int y = 0; y < g_fl.Rows; y++)
{
res[y, x] = dft_image[y, x] / dft_blur[y, x];
}
}
//get the inverse of fourier
Image<Gray, float> ready = new Image<Gray, float>(g_fl.Size);
CvInvoke.Dft(res, ready, Emgu.CV.CvEnum.DxtType.Inverse, 0);
CvInvoke.Imshow("deblur", ready.Convert<Gray,byte>());
CvInvoke.Imshow("original", gray);
CvInvoke.WaitKey(0);
But the result is black and not working. Where is the mistake in my code?
If you have code in OpenCV Python, you can post it :)
Thanks :)
My old implementation of the Wiener filter:
#include "stdafx.h"
#pragma once
#pragma comment(lib, "opencv_legacy220.lib")
#pragma comment(lib, "opencv_core220.lib")
#pragma comment(lib, "opencv_highgui220.lib")
#pragma comment(lib, "opencv_imgproc220.lib")
#include "c:\Users\Andrey\Documents\opencv\include\opencv\cv.h"
#include "c:\Users\Andrey\Documents\opencv\include\opencv\cxcore.h"
#include "c:\Users\Andrey\Documents\opencv\include\opencv\highgui.h"
#include <string>
#include <iostream>
#include <complex>
using namespace std;
using namespace cv;
//----------------------------------------------------------
// Compute the real and imaginary parts of the FFT for a given image
//----------------------------------------------------------
void ForwardFFT(Mat &Src, Mat *FImg)
{
int M = getOptimalDFTSize( Src.rows );
int N = getOptimalDFTSize( Src.cols );
Mat padded;
copyMakeBorder(Src, padded, 0, M - Src.rows, 0, N - Src.cols, BORDER_CONSTANT, Scalar::all(0));
// Create complex representation of our image
// planes[0] real part, planes[1] imaginary part (zeros)
Mat planes[] = {Mat_<double>(padded), Mat::zeros(padded.size(), CV_64F)};
Mat complexImg;
merge(planes, 2, complexImg);
dft(complexImg, complexImg);
// As result, we also have Re and Im planes
split(complexImg, planes);
// Crop the spectrum if it has an odd number of rows or cols
planes[0] = planes[0](Rect(0, 0, planes[0].cols & -2, planes[0].rows & -2));
planes[1] = planes[1](Rect(0, 0, planes[1].cols & -2, planes[1].rows & -2));
FImg[0]=planes[0].clone();
FImg[1]=planes[1].clone();
}
//----------------------------------------------------------
// Restore the image from its spectrum
//----------------------------------------------------------
void InverseFFT(Mat *FImg,Mat &Dst)
{
Mat complexImg;
merge(FImg, 2, complexImg);
// Apply inverse FFT
idft(complexImg, complexImg);
split(complexImg, FImg);
Dst=FImg[0];
}
//----------------------------------------------------------
// Wiener filter
//----------------------------------------------------------
void wienerFilter(Mat &src,Mat &dst,Mat &_h,double k)
{
//---------------------------------------------------
// small number for numeric stability
//---------------------------------------------------
const double eps=1E-8;
//---------------------------------------------------
int ImgW=src.size().width;
int ImgH=src.size().height;
//--------------------------------------------------
Mat Yf[2];
ForwardFFT(src,Yf);
//--------------------------------------------------
Mat h;
h.create(ImgH,ImgW,CV_64F);
h=0;
_h.copyTo(h(Rect(0, 0, _h.size().width, _h.size().height)));
Mat Hf[2];
ForwardFFT(h,Hf);
//--------------------------------------------------
Mat Fu[2];
Fu[0].create(ImgH,ImgW,CV_64F);
Fu[1].create(ImgH,ImgW,CV_64F);
complex<double> a;
complex<double> b;
complex<double> c;
double Hf_Re;
double Hf_Im;
double Phf;
double hfz;
double hz;
double A;
for (int i=0;i<Hf[0].size().height;i++)
{
for (int j=0;j<Hf[0].size().width;j++)
{
Hf_Re=Hf[0].at<double>(i,j);
Hf_Im=Hf[1].at<double>(i,j);
Phf = Hf_Re*Hf_Re+Hf_Im*Hf_Im;
hfz=(Phf<eps)*eps;
hz =(h.at<double>(i,j)>0);
A=Phf/(Phf+hz+k);
a=complex<double>(Yf[0].at<double>(i,j),Yf[1].at<double>(i,j));
b=complex<double>(Hf_Re+hfz,Hf_Im+hfz);
c=a/b; // Deconvolution
// The extra terms (hfz, A) are there to avoid division by zero
Fu[0].at<double>(i,j)=(c.real()*A);
Fu[1].at<double>(i,j)=(c.imag()*A);
}
}
//--------------------------------------------------
Fu[0]/=(ImgW*ImgH);
Fu[1]/=(ImgW*ImgH);
//--------------------------------------------------
InverseFFT(Fu,dst);
// clamp out-of-range values
for (int i=0;i<Hf[0].size().height;i++)
{
for (int j=0;j<Hf[0].size().width;j++)
{
if(dst.at<double>(i,j)>215){dst.at<double>(i,j)=215;}
if(dst.at<double>(i,j)<(-40)){dst.at<double>(i,j)=(-40);}
}
}
}
//----------------------------------------------------------
// Main
//----------------------------------------------------------
int _tmain(int argc, _TCHAR* argv[])
{
// Input image
Mat img;
// Load it from drive
img=imread("data/motion_fuzzy_lena.bmp",0);
//---------------------------------------------
imshow("Src image", img);
// Image size
int ImgW=img.size().width;
int ImgH=img.size().height;
// Deconvolution kernel (coefficient sum must be 1)
// Image was blurred using same kernel
Mat h;
h.create(1,10,CV_64F);
h=1/double(h.size().width*h.size().height);
// Apply filter
wienerFilter(img,img,h,0.05);
normalize(img,img, 0, 1, CV_MINMAX);
imshow("Result image", img);
cvWaitKey(0);
return 0;
}
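For reference, the Wiener deconvolution that wienerFilter computes (the extra hfz and hz terms in the loop are only numeric-stability tweaks) is, in the frequency domain:
\hat{F}(u,v) = \frac{H^{*}(u,v)}{|H(u,v)|^{2} + k}\, G(u,v)
where G is the DFT of the blurred image, H the DFT of the blur kernel h, and k the noise-to-signal constant passed to wienerFilter. The loop builds it as c * A, i.e. (G/H) * |H|^2 / (|H|^2 + k), which is algebraically the same thing.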
The result:

Putting image into a Window in x11

I have a QR code in .JPG format. I load it using OpenCV 3.4.4. Then I create a new X11 window using XCreateSimpleWindow() and resize the QR image to the size of this new window.
Next, I want to put this resized QR code into the window. I tried using XPutImage(), but without any success, probably because I don't know how to use it.
To use XPutImage(), I first grabbed the image of the X11 window with XGetImage(), then read the pixel values of the QR image and assigned them to the pixels of the image obtained through XGetImage().
Once I had this XImage, I tried putting it to the window using XPutImage(), but it still shows a black window.
There is no error in the terminal, but the result is not as desired.
Any solution to this problem? That is, how do I set the background of the X11 window to a given image using XPutImage()?
The code goes like this...
// Written by Ch. Tronche (http://tronche.lri.fr:8000/)
// Copyright by the author. This is unmaintained, no-warranty free software.
// Please use freely. It is appreciated (but by no means mandatory) to
// acknowledge the author's contribution. Thank you.
// Started on Thu Jun 26 23:29:03 1997
//
// Xlib tutorial: 2nd program
// Make a window appear on the screen and draw a line inside.
// If you don't understand this program, go to
// http://tronche.lri.fr:8000/gui/x/xlib-tutorial/2nd-program-anatomy.html
//
// compilation:
// g++ -o go qrinX11.cpp `pkg-config --cflags --libs opencv` -lX11
//
#include <opencv2/opencv.hpp>
#include "opencv2/opencv.hpp" // FOR OpenCV
#include <opencv2/core.hpp> // Basic OpenCV structures (cv::Mat)
#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <bits/stdc++.h>
#include <X11/Xlib.h> // Every Xlib program must include this
#include <assert.h> // I include this to test return values the lazy way
#include <unistd.h> // So we got the profile for 10 seconds
#include <X11/Xutil.h>
#include <X11/keysym.h>
#include <X11/Xlib.h> // Every Xlib program must include this
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/extensions/Xcomposite.h>
#include <X11/extensions/Xfixes.h>
#include <X11/extensions/shape.h>
#define NIL (0) // A name for the void pointer
using namespace cv;
using namespace std;
int main()
{
XGCValues gr_values;
//GC gc;
XColor color, dummy;
Display *dpy = XOpenDisplay(NIL);
//assert(dpy);
//int screen = DefaultScreen(dpy);
// Get some colors
int blackColor = BlackPixel(dpy, DefaultScreen(dpy));
int whiteColor = WhitePixel(dpy, DefaultScreen(dpy));
// Create the window
Window w = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy), 0, 0,
200, 100, 0, whiteColor, blackColor);
// We want to get MapNotify events
XSelectInput(dpy, w, StructureNotifyMask);
XMapWindow(dpy, w);
// Wait for the MapNotify event
for(;;) {
XEvent e;
XNextEvent(dpy, &e);
if (e.type == MapNotify)
break;
}
Window focal = w;
XWindowAttributes gwa;
XGetWindowAttributes(dpy, w, &gwa);
int wd1 = gwa.width;
int ht1 = gwa.height;
XImage *image = XGetImage(dpy, w, 0, 0 , wd1, ht1, AllPlanes, ZPixmap);
unsigned long rm = image->red_mask;
unsigned long gm = image->green_mask;
unsigned long bm = image->blue_mask;
Mat img(ht1, wd1, CV_8UC3); // OpenCV Mat object is initialized
Mat scrap = imread("qr.jpg");//(wid, ht, CV_8UC3);
resize(scrap, img, img.size(), CV_INTER_AREA);
for (int x = 0; x < wd1; x++)
for (int y = 0; y < ht1 ; y++)
{
unsigned long pixel = XGetPixel(image,x,y);
unsigned char blue = pixel & bm; // Applying the red/blue/green mask to obtain the individual channel values
unsigned char green = (pixel & gm) >> 8;
unsigned char red = (pixel & rm) >> 16;
Vec3b color = img.at<Vec3b>(Point(x,y)); // Store RGB values in the OpenCV image
//color[0] = blue;
//color[1] = green;
//color[2] = red;
//img.at<Vec3b>(Point(x,y)) = color;
pixel = color[0];//&color[1]&color[2];
}
namedWindow("QR", CV_WINDOW_NORMAL);
imshow("QR", img);
cout << "herererere\n";
GC gc = XCreateGC(dpy, w, 0, NIL);
XPutImage(dpy, w, gc, image, 0, 0, wd1, ht1, wd1, ht1);
waitKey(0);
//sleep(3);
return 0;
}
Alright, solved it on my own. The silly mistake was in how I updated the pixel values: they have to be written back into the XImage before it is put to the window.
First use XPutPixel() to update each pixel, then use XPutImage() to push the image to the window.
Here is the final, correct method:
// compilation:
// g++ -o go qrinX11.cpp `pkg-config --cflags --libs opencv` -lX11
//
#include <opencv2/opencv.hpp>
#include "opencv2/opencv.hpp" // FOR OpenCV
#include <opencv2/core.hpp> // Basic OpenCV structures (cv::Mat)
#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <bits/stdc++.h>
#include <X11/Xlib.h> // Every Xlib program must include this
#include <assert.h> // I include this to test return values the lazy way
#include <unistd.h> // So we got the profile for 10 seconds
#include <X11/Xutil.h>
#include <X11/keysym.h>
#include <X11/Xlib.h> // Every Xlib program must include this
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/extensions/Xcomposite.h>
#include <X11/extensions/Xfixes.h>
#include <X11/extensions/shape.h>
#define NIL (0) // A name for the void pointer
using namespace cv;
using namespace std;
int main()
{
XGCValues gr_values;
//GC gc;
XColor color, dummy;
Display *dpy = XOpenDisplay(NIL);
//assert(dpy);
//int screen = DefaultScreen(dpy);
// Get some colors
int blackColor = BlackPixel(dpy, DefaultScreen(dpy));
int whiteColor = WhitePixel(dpy, DefaultScreen(dpy));
// Create the window
Window w = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy), 0, 0,
200, 100, 0, whiteColor, blackColor);
// We want to get MapNotify events
XSelectInput(dpy, w, StructureNotifyMask);
XMapWindow(dpy, w);
// Wait for the MapNotify event
for(;;) {
XEvent e;
XNextEvent(dpy, &e);
if (e.type == MapNotify)
break;
}
Window focal = w;
XWindowAttributes gwa;
XGetWindowAttributes(dpy, w, &gwa);
int wd1 = gwa.width;
int ht1 = gwa.height;
XImage *image = XGetImage(dpy, w, 0, 0 , wd1, ht1, AllPlanes, ZPixmap);
unsigned long rm = image->red_mask;
unsigned long gm = image->green_mask;
unsigned long bm = image->blue_mask;
Mat img(ht1, wd1, CV_8UC3); // OpenCV Mat object is initialized
Mat scrap = imread("qr.jpg");//(wid, ht, CV_8UC3);
resize(scrap, img, img.size(), CV_INTER_AREA);
for (int x = 0; x < wd1; x++)
for (int y = 0; y < ht1 ; y++)
{
unsigned long pixel = XGetPixel(image,x,y);
Vec3b color = img.at<Vec3b>(Point(x,y));
pixel = 65536 * color[2] + 256 * color[1] + color[0];
XPutPixel(image, x, y, pixel);
}
namedWindow("QR", CV_WINDOW_NORMAL);
imshow("QR", img);
GC gc = XCreateGC(dpy, w, 0, NIL);
XPutImage(dpy, w, gc, image, 0, 0, 0, 0, wd1, ht1);
waitKey(0);
return 0;
}
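One detail worth flagging in the loop above: 65536 * R + 256 * G + B hard-codes a 24/32-bit TrueColor layout. A slightly more defensive sketch (a hypothetical helper, reusing the headers already included above and assuming 8 bits per channel) derives the shifts from the masks that were already read from the XImage:
// Pack an OpenCV BGR pixel into an XImage pixel using the image's channel masks.
static int lowestSetBit(unsigned long mask)
{
    int shift = 0;
    while (mask && !(mask & 1UL)) { mask >>= 1; ++shift; }
    return shift;
}
static unsigned long packPixel(const cv::Vec3b &bgr,
                               unsigned long rm, unsigned long gm, unsigned long bm)
{
    return ((unsigned long)bgr[2] << lowestSetBit(rm))   // red
         | ((unsigned long)bgr[1] << lowestSetBit(gm))   // green
         | ((unsigned long)bgr[0] << lowestSetBit(bm));  // blue
}
// usage inside the loop: XPutPixel(image, x, y, packPixel(img.at<Vec3b>(Point(x, y)), rm, gm, bm));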
Simplicity is key, and improves performance (in this case):
//..
// Mat img(ht1, wd1, CV_8UC3); // OpenCV Mat object is initialized
cv::Mat img(ht1, wd1, CV_8UC4, image->data); // initilaize with existing mem
Mat scrap = imread("qr.jpg");//(wid, ht, CV_8UC3);
cv::cvtColor(scrap,scrap,cv::COLOR_BGR2BGRA);
resize(scrap, img, img.size(), cv::INTER_AREA);
// .. and we can skip the for loops
namedWindow("QR", CV_WINDOW_NORMAL);
imshow("QR", img);
// .. etc

DX11 convert pixel format BGRA to RGBA

I currently have the problem that a library creates a DX11 texture in BGRA pixel format, but the displaying library can only display RGBA correctly (this means the colors are swapped in the rendered image).
After looking around I found a simple for-loop to solve the problem, but the performance is not very good and it scales badly with higher resolutions. I'm new to DirectX and maybe I just missed a simple function to do the conversion.
// Get the image data
unsigned char* pDest = view->image->getPixels();
// Prepare source texture
ID3D11Texture2D* pTexture = static_cast<ID3D11Texture2D*>( tex );
// Get context
ID3D11DeviceContext* pContext = NULL;
dxDevice11->GetImmediateContext(&pContext);
// Copy data, fast operation
pContext->CopySubresourceRegion(texStaging, 0, 0, 0, 0, tex, 0, nullptr);
// Create mapping
D3D11_MAPPED_SUBRESOURCE mapped;
HRESULT hr = pContext->Map( texStaging, 0, D3D11_MAP_READ, 0, &mapped );
if ( FAILED( hr ) )
{
return;
}
// Calculate size
const size_t size = _width * _height * 4;
// Access pixel data
unsigned char* pSrc = static_cast<unsigned char*>( mapped.pData );
// Offsets
int offsetSrc = 0;
int offsetDst = 0;
int rowOffset = mapped.RowPitch % _width;
// Loop through it, BGRA to RGBA conversion
for (int row = 0; row < _height; ++row)
{
for (int col = 0; col < _width; ++col)
{
pDest[offsetDst] = pSrc[offsetSrc+2];
pDest[offsetDst+1] = pSrc[offsetSrc+1];
pDest[offsetDst+2] = pSrc[offsetSrc];
pDest[offsetDst+3] = pSrc[offsetSrc+3];
offsetSrc += 4;
offsetDst += 4;
}
// Adjust the offset
offsetSrc += rowOffset;
}
// Unmap texture
pContext->Unmap( texStaging, 0 );
Solution:
Texture2D txDiffuse : register(t0);
SamplerState texSampler : register(s0);
struct VSScreenQuadOutput
{
float4 Position : SV_POSITION;
float2 TexCoords0 : TEXCOORD0;
};
float4 PSMain(VSScreenQuadOutput input) : SV_Target
{
return txDiffuse.Sample(texSampler, input.TexCoords0).bgra; // the swizzle swaps the red and blue channels
}
Obviously, iterating over a texture on the CPU is not the most efficient way. If you know that the colors in a texture are always swapped like that and you don't want to modify the texture itself in your C++ code, the most straightforward way is to do it in the pixel shader: when you sample the texture, simply swap the colors there. You won't even notice any performance drop.
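If the CPU copy is kept around for some reason, one more thing to check: D3D11_MAPPED_SUBRESOURCE::RowPitch is a byte count, so the per-row skip is RowPitch - _width * 4 bytes rather than RowPitch % _width. A hedged sketch of the loop with the stride handled that way (reusing the variable names from the question):
// BGRA -> RGBA on the CPU, addressing each source row through RowPitch (bytes).
const unsigned char* pSrcBase = static_cast<const unsigned char*>(mapped.pData);
for (int row = 0; row < _height; ++row)
{
    const unsigned char* srcRow = pSrcBase + row * mapped.RowPitch; // RowPitch may be larger than _width * 4
    unsigned char* dstRow = pDest + row * _width * 4;
    for (int col = 0; col < _width; ++col)
    {
        dstRow[col * 4 + 0] = srcRow[col * 4 + 2]; // R
        dstRow[col * 4 + 1] = srcRow[col * 4 + 1]; // G
        dstRow[col * 4 + 2] = srcRow[col * 4 + 0]; // B
        dstRow[col * 4 + 3] = srcRow[col * 4 + 3]; // A
    }
}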

Image flips, OpenGL output to JPEG using libjpeg

The code below converts OpenGL output to a JPEG image using libjpeg, but the resulting image is flipped vertically...
The code otherwise works fine, but the final image is flipped and I don't know why.
unsigned char *pdata = new unsigned char[width*height*3];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pdata);
FILE *outfile;
if ((outfile = fopen("sample.jpeg", "wb")) == NULL) {
printf("can't open %s");
exit(1);
}
struct jpeg_compress_struct cinfo;
struct jpeg_error_mgr jerr;
cinfo.err = jpeg_std_error(&jerr);
jpeg_create_compress(&cinfo);
jpeg_stdio_dest(&cinfo, outfile);
cinfo.image_width = width;
cinfo.image_height = height;
cinfo.input_components = 3;
cinfo.in_color_space = JCS_RGB;
jpeg_set_defaults(&cinfo);
/*set the quality [0..100] */
jpeg_set_quality (&cinfo, 100, true);
jpeg_start_compress(&cinfo, true);
JSAMPROW row_pointer;
int row_stride = width * 3;
while (cinfo.next_scanline < cinfo.image_height) {
row_pointer = (JSAMPROW) &pdata[cinfo.next_scanline*row_stride];
jpeg_write_scanlines(&cinfo, &row_pointer, 1);
}
jpeg_finish_compress(&cinfo);
fclose(outfile);
jpeg_destroy_compress(&cinfo);
OpenGL's coordinate system has the origin in the lower left corner of the image, while libjpeg assumes that the origin of the image is in the upper left corner. Make the following change to fix your code:
while (cinfo.next_scanline < cinfo.image_height)
{
row_pointer = (JSAMPROW) &pdata[(cinfo.image_height-1-cinfo.next_scanline)*row_stride];
jpeg_write_scanlines(&cinfo, &row_pointer, 1);
}
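A related pitfall while touching this code: glReadPixels pads each row to GL_PACK_ALIGNMENT (4 bytes by default), so when width * 3 is not a multiple of 4 the tightly-packed row_stride above would shear the image. Setting the pack alignment to 1 before reading avoids that:
// Make the read-back rows tightly packed to match the width * 3 stride used above.
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pdata);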
