I am trying to make a C++ emulator for the original Space Invaders arcade machine. Everything works fine, but it is slow. After timing several functions in my project I found that the biggest drain is my draw-screen function. It takes over 100 ms, but I need 16 ms per frame, i.e. 60 Hz. This is my function:
unsigned char *fb = &state->memory[0x2400];
Bitmap ^ bmp = gcnew Bitmap(512, 448);
for (int j = 1; j < 224; j++)
{
    for (int i = 0; i < 256; i += 8)
    {
        unsigned char pix = fb[(j * 256 / 8) + i / 8];
        for (int p = 0; p < 8; p++)
        {
            if (0 != (pix & (1 << p)))
            {
                bmp->SetPixel(i + p, j, Color::White);
            }
            else
            {
                bmp->SetPixel(i + p, j, Color::Black);
            }
        }
    }
}
bmp->RotateFlip(RotateFlipType::Rotate270FlipNone);
this->pictureBox1->Image = bmp;
fb is my framebuffer, a byte array with 8 pixels per byte: 1 for white, 0 for black.
Browsing the internet I found that the "slow part" is the SetPixel method, but I couldn't find a better way to do this.
Thanks to Loathing's comment I solved the problem. Instead of redrawing every pixel, I first clear the image to black and then draw only the white pixels. I had to use a buffer to prevent flickering.
unsigned char *fb = &state->memory[0x2400];
Bitmap ^ bmp = gcnew Bitmap(512, 448);
Image ^buffer = this->pictureBox1->Image;
Graphics ^g = Graphics::FromImage(buffer);
for (int j = 1; j < 224; j++)
{
    for (int i = 0; i < 256; i += 8)
    {
        unsigned char pix = fb[(j * 256 / 8) + i / 8];
        for (int p = 0; p < 8; p++)
        {
            if (0 != (pix & (1 << p)))
            {
                bmp->SetPixel(i + p, j, Color::White);
            }
        }
    }
}
bmp->RotateFlip(RotateFlipType::Rotate270FlipNone);
g->Clear(Color::Black);
buffer = bmp;
this->pictureBox1->Image = buffer;
And at initialisation I added the following code to prevent a null exception from being thrown:
Bitmap ^def = gcnew Bitmap(20, 20);
this->pictureBox1->Image = def;
Update: Although this method works for my project (the screen is mostly composed of black pixels), GetPixel and SetPixel are slow and shouldn't be used in performance-sensitive programs. LockBits is the way to go. I didn't implement it because I got the performance I needed just by clearing the graphics surface.
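For reference, here is a rough, untested sketch of what the LockBits approach could look like for this framebuffer. It writes 32-bit pixels straight into the locked buffer instead of calling SetPixel, assumes the bitmap is created at the native 256x224 resolution (the PictureBox can scale it; the code above used 512x448), and assumes the System::Drawing::Imaging namespace is available:

// Untested sketch: copy the 1-bit-per-pixel framebuffer into a locked 32bpp bitmap.
// Assumes: using namespace System::Drawing; using namespace System::Drawing::Imaging;
unsigned char *fb = &state->memory[0x2400];
Bitmap ^bmp = gcnew Bitmap(256, 224, PixelFormat::Format32bppRgb);
BitmapData ^data = bmp->LockBits(System::Drawing::Rectangle(0, 0, 256, 224),
                                 ImageLockMode::WriteOnly, PixelFormat::Format32bppRgb);
unsigned int *dst = (unsigned int *)data->Scan0.ToPointer();
int stride = data->Stride / 4;   // Stride is in bytes, we index 32-bit pixels
for (int j = 0; j < 224; j++)
{
    for (int i = 0; i < 256; i++)
    {
        unsigned char pix = fb[(j * 256 + i) / 8];
        // one direct memory write per pixel instead of a SetPixel call
        dst[j * stride + i] = (pix & (1 << (i % 8))) ? 0x00FFFFFF : 0x00000000;
    }
}
bmp->UnlockBits(data);
bmp->RotateFlip(RotateFlipType::Rotate270FlipNone);
this->pictureBox1->Image = bmp;

Locking once per frame and writing the whole buffer keeps the per-pixel cost to a single memory write, which is why LockBits scales so much better than SetPixel.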
Referring to this post on Stack Overflow (and this one too), I take a normal image of a flower, then a white image, and then I apply Soft Light.
These are the images (flower and white image):
The result should be something similar to what I get in GIMP:
but what I actually end up with is a completely white image.
I modified the code in order to put it inside a function, and this is my code:
// function
uint convSoftLight(int A, int B) {
    return ((uint)((B < 128) ? (2 * ((A >> 1) + 64)) * ((float)B / 255)
                             : (255 - (2 * (255 - ((A >> 1) + 64)) * (float)(255 - B) / 255))));
}
void function() {
    Mat flower = imread("/Users/rafaelruizmunoz/Desktop/flower.jpg");
    Mat white_flower = Mat::zeros(Size(flower.cols, flower.rows), flower.type());
    Mat mix = Mat::zeros(Size(flower.cols, flower.rows), flower.type());
    for (int i = 0; i < white_flower.rows; i++) {
        for (int j = 0; j < white_flower.cols; j++) {
            white_flower.at<Vec3b>(i,j) = Vec3b(255,255,255);
        }
    }
    imshow("flower", flower);
    imshow("mask_white", white_flower);
    for (int i = 0; i < mix.rows; i++) {
        for (int j = 0; j < mix.cols; j++) {
            Vec3b vec = flower.at<Vec3b>(i,j);
            vec[0] = convSoftLight(vec[0], 255); // 255 or just the white_flower pixel at (i,j)
            vec[1] = convSoftLight(vec[1], 255); // 255 or just the white_flower pixel at (i,j)
            vec[2] = convSoftLight(vec[2], 255); // 255 or just the white_flower pixel at (i,j)
            mix.at<Vec3b>(i,j) = vec;
        }
    }
    imshow("mix", mix);
}
What am I doing wrong?
Thank you.
EDIT: I've tried flipping the argument order (convSoftLight(B, A) instead of convSoftLight(A, B)), but that didn't help (black image).
Based on the Blender definitions, I rewrote my function:
uint convSoftLight(int A, int B) {
    float a = (float)A / 255;
    float b = (float)B / 255;
    float result = 0;
    if (b < 0.5)
        result = 2 * a * b + pow(a, 2) * (1 - 2 * b);
    else
        result = 2 * a * (1 - b) + sqrt(a) * (2 * b - 1);
    return (uint)(255 * result);
}
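As a quick sanity check of the two formulas with my own numbers: plugging B = 255 into the original one-liner makes the (255 - B) factor zero, so it returns 255 for every A, which matches the all-white result. With the rewritten function, a mid-grey pixel a = 0.5 under a white top layer b = 1 gives 2*0.5*(1-1) + sqrt(0.5)*(2*1-1) = sqrt(0.5) ≈ 0.71, i.e. roughly 180 out of 255, so the flower is brightened rather than blown out to white.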
Here's how soft light might be implemented in Python (with OpenCV and NumPy):
import numpy as np
def applySoftLight(bottom, top, mask):
    """ Apply soft light blending
    """
    assert all(image.dtype == np.float32 for image in [bottom, top, mask])
    blend = np.zeros(bottom.shape, dtype=np.float32)
    low = np.where((top < 0.5) & (mask > 0))
    blend[low] = 2 * bottom[low] * top[low] + bottom[low] * bottom[low] * (1 - 2 * top[low])
    high = np.where((top >= 0.5) & (mask > 0))
    blend[high] = 2 * bottom[high] * (1 - top[high]) + np.sqrt(bottom[high]) * (2 * top[high] - 1)
    # alpha blending according to mask
    result = bottom * (1 - mask) + blend * mask
    return result
All matrices must be single channel 2D matrices converted into type np.float32. Mask is a "layer mask" in terms of GIMP/Photoshop.
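Since the question is in C++, here is a rough, untested C++/OpenCV sketch of the same blend for single-channel CV_32F matrices in [0, 1]; the function name and the explicit per-pixel loop are mine, not part of the answer above:

#include <opencv2/opencv.hpp>
#include <cmath>

// Soft light blend of two single-channel CV_32F images in [0,1], masked like the
// Python version above (sketch only, not tested).
cv::Mat applySoftLight(const cv::Mat &bottom, const cv::Mat &top, const cv::Mat &mask)
{
    cv::Mat blend = cv::Mat::zeros(bottom.size(), CV_32F);
    for (int i = 0; i < bottom.rows; i++)
    {
        for (int j = 0; j < bottom.cols; j++)
        {
            if (mask.at<float>(i, j) <= 0.0f)
                continue;
            float a = bottom.at<float>(i, j);
            float b = top.at<float>(i, j);
            blend.at<float>(i, j) = (b < 0.5f)
                ? 2 * a * b + a * a * (1 - 2 * b)
                : 2 * a * (1 - b) + std::sqrt(a) * (2 * b - 1);
        }
    }
    // alpha blend according to the mask
    cv::Mat inverseMask = 1.0 - mask;
    cv::Mat result = bottom.mul(inverseMask) + blend.mul(mask);
    return result;
}

For a colour image you would split into channels with cv::split, run this per channel, and cv::merge the results, in line with the single-channel note above.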
I'm trying to implement the fingerprint image enhancement method by Anil Jain. As a start, I encountered some difficulties while extracting the orientation image, even though I am strictly following the steps described in Section 2.4 of that paper.
So, this is the input image:
And this is after normalization using exactly the same method as in that paper:
I'm expecting to see something like this (an example from the internet):
However, this is what I got for displaying obtained orientation matrix:
Obviously this is wrong; it also gives non-zero values for points that are zero in the original input image.
This is the code I wrote:
cv::Mat orientation(cv::Mat inputImage)
{
    cv::Mat orientationMat = cv::Mat::zeros(inputImage.size(), CV_8UC1);

    // compute gradients at each pixel
    cv::Mat grad_x, grad_y;
    cv::Sobel(inputImage, grad_x, CV_16SC1, 1, 0, 3, 1, 0, cv::BORDER_DEFAULT);
    cv::Sobel(inputImage, grad_y, CV_16SC1, 0, 1, 3, 1, 0, cv::BORDER_DEFAULT);

    cv::Mat Vx, Vy, theta, lowPassX, lowPassY;
    cv::Mat lowPassX2, lowPassY2;

    Vx = cv::Mat::zeros(inputImage.size(), inputImage.type());
    Vx.copyTo(Vy);
    Vx.copyTo(theta);
    Vx.copyTo(lowPassX);
    Vx.copyTo(lowPassY);
    Vx.copyTo(lowPassX2);
    Vx.copyTo(lowPassY2);

    // estimate the local orientation of each block
    int blockSize = 16;

    for(int i = blockSize/2; i < inputImage.rows - blockSize/2; i += blockSize)
    {
        for(int j = blockSize/2; j < inputImage.cols - blockSize/2; j += blockSize)
        {
            float sum1 = 0.0;
            float sum2 = 0.0;

            for(int u = i - blockSize/2; u < i + blockSize/2; u++)
            {
                for(int v = j - blockSize/2; v < j + blockSize/2; v++)
                {
                    sum1 += grad_x.at<float>(u,v) * grad_y.at<float>(u,v);
                    sum2 += (grad_x.at<float>(u,v)*grad_x.at<float>(u,v)) * (grad_y.at<float>(u,v)*grad_y.at<float>(u,v));
                }
            }

            Vx.at<float>(i,j) = sum1;
            Vy.at<float>(i,j) = sum2;

            double calc = 0.0;
            if(sum1 != 0 && sum2 != 0)
            {
                calc = 0.5 * atan(Vy.at<float>(i,j) / Vx.at<float>(i,j));
            }
            theta.at<float>(i,j) = calc;

            // Perform low-pass filtering
            float angle = 2 * calc;
            lowPassX.at<float>(i,j) = cos(angle * pi / 180);
            lowPassY.at<float>(i,j) = sin(angle * pi / 180);

            float sum3 = 0.0;
            float sum4 = 0.0;
            for(int u = -lowPassSize / 2; u < lowPassSize / 2; u++)
            {
                for(int v = -lowPassSize / 2; v < lowPassSize / 2; v++)
                {
                    sum3 += inputImage.at<float>(u,v) * lowPassX.at<float>(i - u*lowPassSize, j - v*lowPassSize);
                    sum4 += inputImage.at<float>(u,v) * lowPassY.at<float>(i - u*lowPassSize, j - v*lowPassSize);
                }
            }
            lowPassX2.at<float>(i,j) = sum3;
            lowPassY2.at<float>(i,j) = sum4;

            float calc2 = 0.0;
            if(sum3 != 0 && sum4 != 0)
            {
                calc2 = 0.5 * atan(lowPassY2.at<float>(i,j) / lowPassX2.at<float>(i,j)) * 180 / pi;
            }
            orientationMat.at<float>(i,j) = calc2;
        }
    }
    return orientationMat;
}
I've already searched a lot on the web, but almost everything I found is in Matlab, and the very few examples using OpenCV didn't help me either. I sincerely hope someone could go through my code and point out any errors. Thank you in advance.
Update
Here are the steps that I followed according to the paper:
Obtain normalized image G.
Divide G into blocks of size w x w (16 x 16).
Compute the x and y gradients at each pixel (i,j).
Estimate the local orientation of each block centered at pixel (i,j) using equations:
Perform low-pass filtering to remove noise. For that, convert the orientation image into a continuous vector field defined as:
where W is a two-dimensional low-pass filter, and w(phi) x w(phi) is its size, which equals 5.
Finally, compute the local ridge orientation at (i,j) using:
Update2
This is the output of orientationMat after changing the Mat type to CV_16SC1 in the Sobel operation, as Micka suggested:
Maybe it's too late for me to answer, but somebody might read this later and want to solve the same problem.
I've been working for a while on the same algorithm, the same method you posted... but there are some writing errors from when the paper was drafted (I guess). After fighting a lot with the equations, I found these errors by looking at other similar works.
Here is what worked for me...
Vy(i, j) = 2*dx(u,v)*dy(u,v)
Vx(i,j) = dx(u,v)^2 - dy(u,v)^2
O(i,j) = 0.5*arctan(Vy(i,j)/Vx(i,j))
(Excuse me, I wasn't able to post images, so I wrote out the modified equations. Remember that "u" and "v" are the positions of the summation across the blockSize by blockSize window.)
The first and most important thing (obviously) is the equations. I saw that in different works these expressions were really different, even though they all talked about the same algorithm of Hong et al.
The key is finding the least mean square (first 3 equations) of the gradients (Vx and Vy); I provided the corrected formulas above for this estimation. Then you can compute the angle theta for the non-overlapping window (16x16 size recommended in the paper). After that, the algorithm says you must calculate the magnitude of the doubled angle in the "x" and "y" directions (Phi_x and Phi_y).
Phi_x(i,j) = V(i,j) * cos(2*O(i,j))
Phi_y(i,j) = V(i,j) * sin(2*O(i,j))
The magnitude is just:
V = sqrt(Vx(i,j)^2 + Vy(i,j)^2)
Note that the related work doesn't mention that you have to use the gradient magnitude, but it makes sense (to me) to do it. After all these corrections you can apply the low-pass filter to Phi_x and Phi_y; I used a simple 5x5 mask to average these magnitudes (something like medianBlur() of OpenCV), as sketched below.
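A minimal sketch of that 5x5 averaging step, assuming Phi_x and Phi_y are CV_32F matrices holding the values defined above (cv::blur is a box average; medianBlur would take a median instead):

// average the doubled-angle components over a 5x5 neighbourhood
cv::blur(Phi_x, Phi_x, cv::Size(5, 5));
cv::blur(Phi_y, Phi_y, cv::Size(5, 5));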
The last thing is to calculate the new angle, that is, the average over the 25 neighbors (the 5x5 window) in the O(i,j) image; for this you just have to compute:
O'(i,j) = 0.5*arctan(Phi_y/Phi_x)
We're nearly there... All this just to calculate the angle of the NORMAL VECTOR TO THE RIDGE DIRECTIONS (O'(i,j)) in the blockSize by blockSize non-overlapping window. What does that mean? It means that the angle we just calculated is perpendicular to the ridges; in simple words, we just calculated the angle of the ridges plus 90 degrees. To get the angle we need, we just have to subtract 90° from the obtained angle.
To draw the lines we need an initial point (X0, Y0) and a final point (X1, Y1). For that, imagine a circle centered on (X0, Y0) with a radius of "r":
x0 = i + blockSize/2
y0 = j + blockSize/2
r = blockSize/2
Note we add i and j to the first coordinates because the window is moving and we are going to draw the line starting from the center of the current non-overlapping window, so we can't use just the center of the first window.
Then, to calculate the end coordinates for drawing a line, we just have to use a right triangle, so...
X1 = r*cos(O'(i,j)-90°)+X0
Y1 = r*sin(O'(i,j)-90°)+Y0
X2 = X0-r*cos(O'(i,j)-90°)
Y2 = Y0-r*sin(O'(i,j)-90°)
Then just use the OpenCV line function, where the initial point is (X0, Y0) and the final point is (X1, Y1). In addition, I drew the 16x16 windows and computed the points opposite to X1 and Y1 (X2 and Y2) to draw a line across the entire window.
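Something like this, where theta is O'(i,j) already in radians and image, x0, y0, r are the values defined above (the variable names are mine, just to illustrate the drawing step):

// draw the ridge direction for this block; theta is O'(i,j) in radians
double angle = theta - CV_PI / 2;   // O'(i,j) minus 90 degrees
cv::Point p0(x0, y0);
cv::Point p1(cvRound(x0 + r * std::cos(angle)), cvRound(y0 + r * std::sin(angle)));
cv::Point p2(cvRound(x0 - r * std::cos(angle)), cvRound(y0 - r * std::sin(angle)));
cv::line(image, p0, p1, cv::Scalar::all(255), 1);
cv::line(image, p0, p2, cv::Scalar::all(255), 1);   // opposite half of the window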
Hope this helps somebody.
My results...
Main function:
Mat mat = imread("nwmPa.png",0);
mat.convertTo(mat, CV_32F, 1.0/255, 0);
Normalize(mat);
int blockSize = 6;
int height = mat.rows;
int width = mat.cols;
Mat orientationMap;
orientation(mat, orientationMap, blockSize);
Normalize:
void Normalize(Mat &image)
{
    Scalar mean, dev;
    meanStdDev(image, mean, dev);
    double M = mean.val[0];
    double D = dev.val[0];

    for(int i(0); i < image.rows; i++)
    {
        for(int j(0); j < image.cols; j++)
        {
            if(image.at<float>(i,j) > M)
                image.at<float>(i,j) = 100.0/255 + sqrt( 100.0/255*pow(image.at<float>(i,j)-M,2)/D );
            else
                image.at<float>(i,j) = 100.0/255 - sqrt( 100.0/255*pow(image.at<float>(i,j)-M,2)/D );
        }
    }
}
Orientation map:
void orientation(const Mat &inputImage, Mat &orientationMap, int blockSize)
{
    Mat fprintWithDirectionsSmoo = inputImage.clone();
    Mat tmp(inputImage.size(), inputImage.type());
    Mat coherence(inputImage.size(), inputImage.type());
    orientationMap = tmp.clone();

    // gradients in x and y
    Mat grad_x, grad_y;
    // Sobel(inputImage, grad_x, CV_32F, 1, 0, 3, 1, 0, BORDER_DEFAULT);
    // Sobel(inputImage, grad_y, CV_32F, 0, 1, 3, 1, 0, BORDER_DEFAULT);
    Scharr(inputImage, grad_x, CV_32F, 1, 0, 1, 0);
    Scharr(inputImage, grad_y, CV_32F, 0, 1, 1, 0);

    // vector field
    Mat Fx(inputImage.size(), inputImage.type()),
        Fy(inputImage.size(), inputImage.type()),
        Fx_gauss,
        Fy_gauss;
    Mat smoothed(inputImage.size(), inputImage.type());

    // local orientation for each block
    int width = inputImage.cols;
    int height = inputImage.rows;
    int blockH;
    int blockW;

    // select block
    for(int i = 0; i < height; i += blockSize)
    {
        for(int j = 0; j < width; j += blockSize)
        {
            float Gsx = 0.0;
            float Gsy = 0.0;
            float Gxx = 0.0;
            float Gyy = 0.0;

            // check bounds of the image
            blockH = ((height-i) < blockSize) ? (height-i) : blockSize;
            blockW = ((width-j) < blockSize) ? (width-j) : blockSize;

            // average over the W x W block
            for(int u = i; u < i + blockH; u++)
            {
                for(int v = j; v < j + blockW; v++)
                {
                    Gsx += (grad_x.at<float>(u,v)*grad_x.at<float>(u,v)) - (grad_y.at<float>(u,v)*grad_y.at<float>(u,v));
                    Gsy += 2*grad_x.at<float>(u,v) * grad_y.at<float>(u,v);
                    Gxx += grad_x.at<float>(u,v)*grad_x.at<float>(u,v);
                    Gyy += grad_y.at<float>(u,v)*grad_y.at<float>(u,v);
                }
            }

            float coh = sqrt(pow(Gsx,2) + pow(Gsy,2)) / (Gxx + Gyy);
            // smoothed
            float fi = 0.5*fastAtan2(Gsy, Gsx)*CV_PI/180;
            Fx.at<float>(i,j) = cos(2*fi);
            Fy.at<float>(i,j) = sin(2*fi);

            // fill blocks
            for(int u = i; u < i + blockH; u++)
            {
                for(int v = j; v < j + blockW; v++)
                {
                    orientationMap.at<float>(u,v) = fi;
                    Fx.at<float>(u,v) = Fx.at<float>(i,j);
                    Fy.at<float>(u,v) = Fy.at<float>(i,j);
                    coherence.at<float>(u,v) = (coh < 0.85) ? 1 : 0;
                }
            }
        }
    } /// for

    GaussConvolveWithStep(Fx, Fx_gauss, 5, blockSize);
    GaussConvolveWithStep(Fy, Fy_gauss, 5, blockSize);

    for(int m = 0; m < height; m++)
    {
        for(int n = 0; n < width; n++)
        {
            smoothed.at<float>(m,n) = 0.5*fastAtan2(Fy_gauss.at<float>(m,n), Fx_gauss.at<float>(m,n))*CV_PI/180;

            if((m % blockSize) == 0 && (n % blockSize) == 0)
            {
                int x = n;
                int y = m;
                int ln = sqrt(2*pow(blockSize,2))/2;
                float dx = ln*cos( smoothed.at<float>(m,n) - CV_PI/2);
                float dy = ln*sin( smoothed.at<float>(m,n) - CV_PI/2);
                arrowedLine(fprintWithDirectionsSmoo, Point(x, y+blockH), Point(x + dx, y + blockW + dy), Scalar::all(255), 1, CV_AA, 0, 0.06*blockSize);
                // qDebug() << Fx_gauss.at<float>(m,n) << Fy_gauss.at<float>(m,n) << smoothed.at<float>(m,n);
                // imshow("Orientation", fprintWithDirectionsSmoo);
                // waitKey(0);
            }
        }
    } /// for2

    normalize(orientationMap, orientationMap, 0, 1, NORM_MINMAX);
    imshow("Orientation field", orientationMap);
    orientationMap = smoothed.clone();
    normalize(smoothed, smoothed, 0, 1, NORM_MINMAX);
    imshow("Smoothed orientation field", smoothed);
    imshow("Coherence", coherence);
    imshow("Orientation", fprintWithDirectionsSmoo);
}
Seems I haven't forgotten anything :)
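One helper the code above does not show is GaussConvolveWithStep, which is not a standard OpenCV function. A minimal sketch of what it might look like, assuming it low-pass filters the block-wise field (one sample per blockSize step) with a small Gaussian and then writes the smoothed value back over each block:

void GaussConvolveWithStep(const Mat &src, Mat &dst, int kernelSize, int step)
{
    // take one sample per block (the field is constant inside each block)
    Mat blockField((src.rows + step - 1) / step, (src.cols + step - 1) / step, CV_32F);
    for (int i = 0; i < blockField.rows; i++)
        for (int j = 0; j < blockField.cols; j++)
            blockField.at<float>(i, j) = src.at<float>(i * step, j * step);

    // low-pass filter on the block grid (kernelSize is 5 in the calls above)
    GaussianBlur(blockField, blockField, Size(kernelSize, kernelSize), 0);

    // replicate the smoothed block values back to full resolution
    dst = Mat(src.size(), src.type());
    for (int m = 0; m < dst.rows; m++)
        for (int n = 0; n < dst.cols; n++)
            dst.at<float>(m, n) = blockField.at<float>(m / step, n / step);
}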
I have read your code thoroughly and found that you have made a mistake while calculating sum3 and sum4:
sum3 += inputImage.at<float>(u,v) * lowPassX.at<float>(i - u*lowPassSize, j - v * lowPassSize);
sum4 += inputImage.at<float>(u, v) * lowPassY.at<float>(i - u*lowPassSize, j - v * lowPassSize);
Instead of inputImage you should use a low-pass filter.
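In other words (my reading of this answer, using the variable names and lowPassSize = 5 from the question, and ignoring border handling), the smoothing loop would apply the filter weights W to the lowPassX/lowPassY fields rather than sampling inputImage:

// sketch: convolve the orientation vector field with a simple box filter W
float sum3 = 0.0f, sum4 = 0.0f;
float W = 1.0f / (lowPassSize * lowPassSize);   // uniform low-pass filter weights
for (int u = -lowPassSize / 2; u <= lowPassSize / 2; u++)
{
    for (int v = -lowPassSize / 2; v <= lowPassSize / 2; v++)
    {
        // neighbouring block centres are blockSize apart
        sum3 += W * lowPassX.at<float>(i - u * blockSize, j - v * blockSize);
        sum4 += W * lowPassY.at<float>(i - u * blockSize, j - v * blockSize);
    }
}
lowPassX2.at<float>(i, j) = sum3;
lowPassY2.at<float>(i, j) = sum4;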
I have an image in a Windows Phone app, and an array containing 10 values; depending on those values I am setting the contrast as follows.
The problem is that when I slide forward it is perfect, but when sliding back the image does not return to the original.
I have already tried and researched a lot.
The code actually comes from a Code Project article:
private double lastslidervalue;

private void sliderContrast_ValueChanged(object sender, MouseButtonEventArgs e)
{
    if (sliderContrast == null) return;

    double[] contrastArray = { 1, 1.2, 1.3, 1.6, 1.7, 1.9, 2.1, 2.4, 2.6, 2.9 };
    int nIndex = (int)sliderContrast.Value - (int)this.lastslidervalue;
    if (nIndex == -1)
    {
        int nIndex = this.lastslidervalue - sliderContrast.Value
        this.lastslidervalue = sliderContrast.value
    }
    else
    {
        nIndex = (int)sliderContrast.Value - (int)this.lastslidervalue;
        this.lastslidervalue = sliderContrast.value
    }
    double CFactor = contrastArray[nIndex];

    WriteableBitmap wb;
    wb = new WriteableBitmap(wbOriginal.PixelWidth, wbOriginal.PixelHeight);
    //wb = new WriteableBitmap(imgOriginal);
    wbOriginal.Pixels.CopyTo(wb.Pixels, 0);
    int h = wb.PixelHeight;
    int w = wb.PixelWidth;

    for (int i = 0; i < wb.Pixels.Count(); i++)
    {
        int pixel = wb.Pixels[i];
        int B = (int)(pixel & 0xFF); pixel >>= 8;
        int G = (int)(pixel & 0xFF); pixel >>= 8;
        int R = (int)(pixel & 0xFF); pixel >>= 8;
        int A = (int)(pixel);

        R = (int)Math.Max(0, Math.Min(255, (((R - 128) * CFactor) + 128)));
        G = (int)Math.Max(0, Math.Min(255, (((G - 128) * CFactor) + 128)));
        B = (int)Math.Max(0, Math.Min(255, (((B - 128) * CFactor) + 128)));
        if (R > 255) R = 255; if (G > 255) G = 255; if (B > 255) B = 255;
        if (R < 0) R = 0; if (G < 0) G = 0; if (B < 0) B = 0;

        wb.Pixels[i] = B | (G << 8) | (R << 16) | (A << 24);
    }
    wb.Invalidate();
    image1.Source = wb;
}
I am using the ValueChanged event of the slider.
After debugging, I found that the R value keeps decreasing while sliding forward, e.g. 117, 114, 101, 95. After sliding backward R should increase, but it keeps decreasing; e.g. after 95 it is 76.
Please help.
Your contrastArray contains only values that raise the contrast. When you move the slider back, you either still apply a contrast increase, or you crash your app.
double[] contrastArray = { 1, 1.2, 1.3, 1.6, 1.7, 1.9, 2.1, 2.4, 2.6, 2.9 };
All of these values are 1 or larger.
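To make that concrete with the contrast formula from the question (my numbers): newR = (R - 128) * CFactor + 128, so for R = 117 and CFactor = 1.2 you get (117 - 128) * 1.2 + 128 = 114.8, truncated to 114, and for CFactor = 2.9 you get (117 - 128) * 2.9 + 128 = 96.1, truncated to 96. Any factor above 1 pushes values below 128 further down, which is exactly the 117, 114, 101, 95, ... sequence observed in the question; only a factor below 1 can bring them back up.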
Also, this can crash your app:
int nIndex = (int)sliderContrast.Value-(int)this.lastslidervalue;
this.lastslidervalue=sliderContrast.value
if (nIndex == -1) return;
because sometimes the change can be larger than one in the negative direction. That means you can get an nIndex smaller than -1, which then crashes here:
double CFactor = contrastArray[nIndex];
However, you could use this code to get what you want
private void sliderContrast_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
{
    if (sliderContrast == null) return;

    double[] contrastArray = { 0.1, 0.2, 0.3, 0.4, 0.6, 1, 1.2, 1.3, 1.6, 1.7, 1.9, 2.1, 2.4, 2.6, 2.9 };
    int nIndex = Convert.ToInt32(sliderContrast.Value);
    double CFactor = contrastArray[nIndex];

    //.... the rest is the same
And if you set the slider's starting value to Value = 5 (do this in XAML) and make the maximum value a bit larger (to account for the added values), then moving the slider right or left will make the contrast higher or lower depending on the slider value. Works on my machine. :)