How does the OpenCV code for the Mixture of Gaussians algorithm determine a pixel is foreground?

I am trying to study the source code of the Mixture of Gaussians algorithm implemented in OpenCV 3.0 (the source code is in the contrib repository). I am focusing on the following code segment (retrieved from opencv_contrib-master\modules\bgsegm\src\bgfg_gaussmix.cpp, using the single-channel version process8uC1() as an example):
for( y = 0; y < rows; y++ )
{
    const uchar* src = image.ptr<uchar>(y);
    uchar* dst = fgmask.ptr<uchar>(y);
    // ... removed some code as I only consider learning rate within [0, 1]
    for( x = 0; x < cols; x++, mptr += K )
    {
        float wsum = 0;
        float pix = src[x];
        int kHit = -1, kForeground = -1;
        for( k = 0; k < K; k++ )
        {
            float w = mptr[k].weight;
            wsum += w;
            if( w < FLT_EPSILON )
                break;
            float mu = mptr[k].mean;
            float var = mptr[k].var;
            float diff = pix - mu;
            float d2 = diff*diff;
            if( d2 < vT*var )
            {
                wsum -= w;
                float dw = alpha*(1.f - w);
                mptr[k].weight = w + dw;
                mptr[k].mean = mu + alpha*diff;
                var = std::max(var + alpha*(d2 - var), minVar);
                mptr[k].var = var;
                mptr[k].sortKey = w/std::sqrt(var);
                for( k1 = k-1; k1 >= 0; k1-- )
                {
                    if( mptr[k1].sortKey >= mptr[k1+1].sortKey )
                        break;
                    std::swap( mptr[k1], mptr[k1+1] );
                }
                kHit = k1+1;
                break;
            }
        }
        if( kHit < 0 ) // no appropriate gaussian mixture found at all, remove the weakest mixture and create a new one
        {
            kHit = k = std::min(k, K-1);
            wsum += w0 - mptr[k].weight;
            mptr[k].weight = w0;
            mptr[k].mean = pix;
            mptr[k].var = var0;
            mptr[k].sortKey = sk0;
        }
        else
            for( ; k < K; k++ )
                wsum += mptr[k].weight;
        float wscale = 1.f/wsum;
        wsum = 0;
        for( k = 0; k < K; k++ )
        {
            wsum += mptr[k].weight *= wscale;
            mptr[k].sortKey *= wscale;
            if( wsum > T && kForeground < 0 )
                kForeground = k+1;
        }
        dst[x] = (uchar)(-(kHit >= kForeground));
    }
}
As far as I understand, the pixel is considered foreground if kHit >= kForeground at the end of the code segment. This may occur in 2 cases:
when kHit == -1 after the first for(k) loop. This is easy to interpret: none of the Gaussian curves fits the current pixel, i.e. all Gaussian curves treat the current pixel as foreground. Or
when kHit > -1 after the first for(k) loop and kHit >= kForeground, where kForeground is found in the final for(k) loop.
What I do not understand is:
What is the physical interpretation of the condition if( wsum > T && kForeground < 0 )? I know that wsum is the running cumulative sum of the weights of the first k curves, and T is the background ratio set by the user within [0, 1], but why do we need to calculate the cumulative sum of the weights and compare it with the background ratio?
Why do we need to sort the curves in descending order of sortKey, which is calculated as w/std::sqrt(var) (weight / standard deviation)? Why not simply sort in descending order of the curves' weight?
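For what it's worth, here is a minimal sketch (plain C++; the struct mirrors the fields used above, and it assumes the K components are already sorted by sortKey in descending order with normalized weights) of what the final loop computes:

struct MixData { float sortKey, weight, mean, var; }; // fields as used above

// Index of the first component treated as foreground: the top-ranked
// components whose cumulative weight just exceeds T model the background.
int firstForeground(const MixData* mptr, int K, float T)
{
    float wsum = 0.f;
    for (int k = 0; k < K; k++)
    {
        wsum += mptr[k].weight;
        if (wsum > T)
            return k + 1;   // components k+1 .. K-1 are foreground models
    }
    return K;               // every component models the background
}

Under that reading, dst[x] is set exactly when the matched (or newly created) component does not rank among the components whose cumulative weight first exceeds T, which matches the background-model selection rule in Stauffer and Grimson's paper.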

Related

how to calculate Otsu threshold in 1D

I'm trying to identify bimodal distributions in my analytical chemistry data. Each data set is a list of 3~70 retention times for a particular compound from the GC-MS. RTs for some compounds are bimodally distributed, where the library searches have assigned the same identity to two or more different features in the data with different RTs. This is quite common for isomers and other compound pairs with very similar mass spectra.
E.g. here's a histogram of RTs for one compound showing a bimodal distribution.
I want to calculate the Otsu threshold to try and define bimodal data (there are also multimodal distributions, but one step at a time). I'm struggling to understand the Wikipedia article on the calculations, but the text indicates that the threshold can be found by minimizing the intra-class variance. So I've tried computing this from a list of the RTs as follows:
import statistics

a = list(d['Component RT'])   # d holds the RT data (e.g. a DataFrame)
n = len(a)
b = [a.pop(0)]
varA = []
varB = []
for i in range(1, n - 2):
    b.append(a.pop(0))
    varA.append(statistics.stdev(a)**2)
    varB.append(statistics.stdev(b)**2)
Am I right in thinking that if I plot the sum of the variances for the above data I should be able to identify the Otsu threshold as the minimum?
In this example the threshold is obvious and there are about 35 values to work from. For most compounds there are fewer values (typically <15) and the data may be less well defined. Is this even the right threshold to use? The Wikipedia article on modality lists a whole bunch of other tests for multimodality.
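For reference, the quantity Otsu's method minimizes is the weighted within-class variance sigma_w^2(t) = w0(t)*sigma0^2(t) + w1(t)*sigma1^2(t), where w0 and w1 are the fractions of samples below and above the threshold t. Minimizing it over t is equivalent to maximizing the between-class variance sigma_b^2(t) = w0(t)*w1(t)*(mu0(t) - mu1(t))^2, which is what the code below does. Note the class weights: a plain unweighted sum of the two variances is not quite the same criterion.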
The result is similar to OpenCV's threshold with THRESH_OTSU.
uchar OTSU(const std::vector<uchar>& input_vec) {
    // normalize input to 0-255 if needed
    int count[256] = { 0 };
    double u0, u1, u;
    double pixelSum0, pixelSum1;
    int n0, n1;
    int bestThreshold = 0;
    double w0, w1;
    double betweenVar = 0, maxBetweenVar = 0;
    for (size_t i = 0; i < input_vec.size(); i++) {
        count[int(input_vec[i])]++;
    }
    for (int threshold = 0; threshold < 256; threshold++) {
        n0 = 0;
        n1 = 0;
        pixelSum0 = 0;
        pixelSum1 = 0;
        for (int i = 0; i < threshold; i++) {
            n0 += count[i];
            pixelSum0 += i * count[i];
        }
        for (int i = threshold; i < 256; i++) {
            n1 += count[i];
            pixelSum1 += i * count[i];
        }
        if (n0 == 0 || n1 == 0) // skip empty classes to avoid dividing by zero
            continue;
        w0 = double(n0) / input_vec.size();
        w1 = double(n1) / input_vec.size();
        u0 = pixelSum0 / n0;
        u1 = pixelSum1 / n1;
        u = u0 * w0 + u1 * w1;
        // between-class variance; maximizing it is equivalent to
        // minimizing the within-class variance
        betweenVar = w0 * pow(u0 - u, 2) + w1 * pow(u1 - u, 2);
        if (betweenVar > maxBetweenVar) {
            maxBetweenVar = betweenVar;
            bestThreshold = threshold;
        }
    }
    return bestThreshold;
}
Ref: https://github.com/1124418652/edge_extract/blob/master/edge_extract/OTSU.cpp
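A possible usage sketch for the retention-time case in the question (plain C++; the scaling helper and names are illustrative, not from the original post):

#include <algorithm>
#include <vector>

using uchar = unsigned char;

// Scale raw retention times to the 0-255 range expected by OTSU().
// Assumes at least two distinct values, so span > 0.
std::vector<uchar> scaleTo255(const std::vector<double>& rt)
{
    auto mm = std::minmax_element(rt.begin(), rt.end());
    double lo = *mm.first, span = *mm.second - lo;
    std::vector<uchar> out;
    for (double v : rt)
        out.push_back(static_cast<uchar>(255.0 * (v - lo) / span));
    return out;
}

// uchar t = OTSU(scaleTo255(retentionTimes));
// RTs scaling below t fall in the first mode, the rest in the second.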

Obtaining orientation map of fingerprint image using OpenCV

I'm trying to implement the method of improving fingerprint images by Anil Jain. As a starter, I encountered some difficulties while extracting the orientation image, and am strictly following those steps described in Section 2.4 of that paper.
So, this is the input image:
And this is after normalization using exactly the same method as in that paper:
I'm expecting to see something like this (an example from the internet):
However, this is what I got for displaying obtained orientation matrix:
Obviously this is wrong, and it also gives non-zero values for those zero points in the original input image.
This is the code I wrote:
cv::Mat orientation(cv::Mat inputImage)
{
    cv::Mat orientationMat = cv::Mat::zeros(inputImage.size(), CV_8UC1);
    // compute gradients at each pixel
    cv::Mat grad_x, grad_y;
    cv::Sobel(inputImage, grad_x, CV_16SC1, 1, 0, 3, 1, 0, cv::BORDER_DEFAULT);
    cv::Sobel(inputImage, grad_y, CV_16SC1, 0, 1, 3, 1, 0, cv::BORDER_DEFAULT);
    cv::Mat Vx, Vy, theta, lowPassX, lowPassY;
    cv::Mat lowPassX2, lowPassY2;
    Vx = cv::Mat::zeros(inputImage.size(), inputImage.type());
    Vx.copyTo(Vy);
    Vx.copyTo(theta);
    Vx.copyTo(lowPassX);
    Vx.copyTo(lowPassY);
    Vx.copyTo(lowPassX2);
    Vx.copyTo(lowPassY2);
    // estimate the local orientation of each block
    int blockSize = 16;
    for(int i = blockSize/2; i < inputImage.rows - blockSize/2; i += blockSize)
    {
        for(int j = blockSize/2; j < inputImage.cols - blockSize/2; j += blockSize)
        {
            float sum1 = 0.0;
            float sum2 = 0.0;
            for(int u = i - blockSize/2; u < i + blockSize/2; u++)
            {
                for(int v = j - blockSize/2; v < j + blockSize/2; v++)
                {
                    sum1 += grad_x.at<float>(u,v) * grad_y.at<float>(u,v);
                    sum2 += (grad_x.at<float>(u,v)*grad_x.at<float>(u,v)) * (grad_y.at<float>(u,v)*grad_y.at<float>(u,v));
                }
            }
            Vx.at<float>(i,j) = sum1;
            Vy.at<float>(i,j) = sum2;
            double calc = 0.0;
            if(sum1 != 0 && sum2 != 0)
            {
                calc = 0.5 * atan(Vy.at<float>(i,j) / Vx.at<float>(i,j));
            }
            theta.at<float>(i,j) = calc;
            // Perform low-pass filtering
            float angle = 2 * calc;
            lowPassX.at<float>(i,j) = cos(angle * pi / 180);
            lowPassY.at<float>(i,j) = sin(angle * pi / 180);
            float sum3 = 0.0;
            float sum4 = 0.0;
            for(int u = -lowPassSize / 2; u < lowPassSize / 2; u++)
            {
                for(int v = -lowPassSize / 2; v < lowPassSize / 2; v++)
                {
                    sum3 += inputImage.at<float>(u,v) * lowPassX.at<float>(i - u*lowPassSize, j - v * lowPassSize);
                    sum4 += inputImage.at<float>(u,v) * lowPassY.at<float>(i - u*lowPassSize, j - v * lowPassSize);
                }
            }
            lowPassX2.at<float>(i,j) = sum3;
            lowPassY2.at<float>(i,j) = sum4;
            float calc2 = 0.0;
            if(sum3 != 0 && sum4 != 0)
            {
                calc2 = 0.5 * atan(lowPassY2.at<float>(i,j) / lowPassX2.at<float>(i,j)) * 180 / pi;
            }
            orientationMat.at<float>(i,j) = calc2;
        }
    }
    return orientationMat;
}
I've already searched a lot on the web, but almost all of the examples are in Matlab, and the few that use OpenCV didn't help me either. I sincerely hope someone could go through my code and point out any errors. Thank you in advance.
Update
Here are the steps that I followed according to the paper:
Obtain normalized image G.
Divide G into blocks of size wxw (16x16).
Compute the x and y gradients at each pixel (i,j).
Estimate the local orientation of each block centered at pixel (i,j) using equations:
Perform low-pass filtering to remove noise. For that, convert the orientation image into a continuous vector field defined as:
where W is a two-dimensional low-pass filter and w(phi) x w(phi) is its size, which equals 5.
Finally, compute the local ridge orientation at (i,j) using:
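In symbols (the equation images are missing here, so this is reconstructed from the code above; the exact V_x/V_y forms are discussed in the answer below): theta(i,j) = 0.5 * arctan(V_y(i,j)/V_x(i,j)), where V_x and V_y are block sums of gradient products; the continuous vector field is Phi_x(i,j) = cos(2*theta(i,j)) and Phi_y(i,j) = sin(2*theta(i,j)); the filtered field is Phi'_x(i,j) = sum_u sum_v W(u,v) * Phi_x(i - u*w, j - v*w), and likewise for Phi'_y; and finally O(i,j) = 0.5 * arctan(Phi'_y(i,j)/Phi'_x(i,j)).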
Update2
This is the output of orientationMat after changing the mat type to CV_16SC1 in Sobel operation as Micka suggested:
Maybe it's too late for me to answer, but anyway somebody could read this later and solve the same problem.
I've been working for a while on the same algorithm, same method you posted... But there are some writing errors in the paper (I guess). After fighting a lot with the equations, I found these errors by looking at other similar works.
Here is what worked for me...
Vy(i, j) = 2*dx(u,v)*dy(u,v)
Vx(i,j) = dx(u,v)^2 - dy(u,v)^2
O(i,j) = 0.5*arctan(Vy(i,j)/Vx(i,j))
(Excuse me, I wasn't able to post images, so I wrote the modified equations. Remember "u" and "v" are the positions of the summation across the BlockSize by BlockSize window.)
The first and most important thing (obviously) are the equations. I saw that in different works these expressions were really different, and in every one they talked about the same algorithm of Hong et al.
The key is finding the least mean square (first 3 equations) of the gradients (Vx and Vy); I provided the corrected formulas above for this step. Then you can compute the angle theta for the non-overlapping window (16x16 size recommended in the paper). After that, the algorithm says you must calculate the magnitude of the doubled angle in the "x" and "y" directions (Phi_x and Phi_y).
Phi_x(i,j) = V(i,j) * cos(2*O(i,j))
Phi_y(i,j) = V(i,j) * sin(2*O(i,j))
The magnitude is just:
V = sqrt(Vx(i,j)^2 + Vy(i,j)^2)
Note that the related work doesn't mention that you have to use the gradient magnitude, but it makes sense (to me) to do it. After all these corrections you can apply the low-pass filter to Phi_x and Phi_y; I used a simple 5x5 mask to average these magnitudes (something like OpenCV's medianBlur()).
The last thing is to calculate the new angle, that is the average of the 25 neighbors (5x5) in the O(i,j) image; for this you just have to:
O'(i,j) = 0.5*arctan(Phi_y/Phi_x)
We're almost there... All this just for calculating the angle of the NORMAL VECTOR TO THE RIDGE DIRECTIONS (O'(i,j)) in the BlockSize by BlockSize non-overlapping window. What does that mean? It means that the angle we just calculated is perpendicular to the ridges; in simple words, we just calculated the angle of the ridges plus 90 degrees. To get the angle we need, we just have to subtract 90° from the obtained angle.
To draw the lines we need an initial point (X0, Y0) and a final point (X1, Y1). For that, imagine a circle centered on (X0, Y0) with a radius of "r":
x0 = i + blocksize/2
y0 = j + blocksize/2
r = blocksize/2
Note we add i and j to the coordinates because the window is moving, and we are going to draw the line starting from the center of each non-overlapping window, so we can't use just the center of a single window.
Then, to calculate the end coordinates for drawing a line, we just have to use a right triangle, so...
X1 = r*cos(O'(i,j)-90°)+X0
Y1 = r*sin(O'(i,j)-90°)+Y0
X2 = X0-r*cos(O'(i,j)-90°)
Y2 = Y0-r*sin(O'(i,j)-90°)
Then just use the OpenCV line function, where the initial point is (X0, Y0) and the final point is (X1, Y1). In addition to it, I drew the 16x16 windows and computed the opposite points of X1 and Y1 (X2 and Y2) to draw a line across the entire window.
Hope this helps somebody.
My results...
Main function:
Mat mat = imread("nwmPa.png",0);
mat.convertTo(mat, CV_32F, 1.0/255, 0);
Normalize(mat);
int blockSize = 6;
int height = mat.rows;
int width = mat.cols;
Mat orientationMap;
orientation(mat, orientationMap, blockSize);
Normalize:
void Normalize(Mat &image)
{
    Scalar mean, dev;
    meanStdDev(image, mean, dev);
    double M = mean.val[0];
    double D = dev.val[0];
    for(int i(0); i < image.rows; i++)
    {
        for(int j(0); j < image.cols; j++)
        {
            if(image.at<float>(i,j) > M)
                image.at<float>(i,j) = 100.0/255 + sqrt( 100.0/255*pow(image.at<float>(i,j)-M,2)/D );
            else
                image.at<float>(i,j) = 100.0/255 - sqrt( 100.0/255*pow(image.at<float>(i,j)-M,2)/D );
        }
    }
}
Orientation map:
void orientation(const Mat &inputImage, Mat &orientationMap, int blockSize)
{
    Mat fprintWithDirectionsSmoo = inputImage.clone();
    Mat tmp(inputImage.size(), inputImage.type());
    Mat coherence(inputImage.size(), inputImage.type());
    orientationMap = tmp.clone();

    //Gradients x and y
    Mat grad_x, grad_y;
    // Sobel(inputImage, grad_x, CV_32F, 1, 0, 3, 1, 0, BORDER_DEFAULT);
    // Sobel(inputImage, grad_y, CV_32F, 0, 1, 3, 1, 0, BORDER_DEFAULT);
    Scharr(inputImage, grad_x, CV_32F, 1, 0, 1, 0);
    Scharr(inputImage, grad_y, CV_32F, 0, 1, 1, 0);

    //Vector field
    Mat Fx(inputImage.size(), inputImage.type()),
        Fy(inputImage.size(), inputImage.type()),
        Fx_gauss,
        Fy_gauss;
    Mat smoothed(inputImage.size(), inputImage.type());

    // Local orientation for each block
    int width = inputImage.cols;
    int height = inputImage.rows;
    int blockH;
    int blockW;

    //select block
    for(int i = 0; i < height; i += blockSize)
    {
        for(int j = 0; j < width; j += blockSize)
        {
            float Gsx = 0.0;
            float Gsy = 0.0;
            float Gxx = 0.0;
            float Gyy = 0.0;

            //check bounds of img
            blockH = ((height-i) < blockSize) ? (height-i) : blockSize;
            blockW = ((width-j) < blockSize) ? (width-j) : blockSize;

            //average over the W x W block
            for(int u = i; u < i + blockH; u++)
            {
                for(int v = j; v < j + blockW; v++)
                {
                    Gsx += (grad_x.at<float>(u,v)*grad_x.at<float>(u,v)) - (grad_y.at<float>(u,v)*grad_y.at<float>(u,v));
                    Gsy += 2*grad_x.at<float>(u,v) * grad_y.at<float>(u,v);
                    Gxx += grad_x.at<float>(u,v)*grad_x.at<float>(u,v);
                    Gyy += grad_y.at<float>(u,v)*grad_y.at<float>(u,v);
                }
            }

            float coh = sqrt(pow(Gsx,2) + pow(Gsy,2)) / (Gxx + Gyy);
            //smoothed
            float fi = 0.5*fastAtan2(Gsy, Gsx)*CV_PI/180;
            Fx.at<float>(i,j) = cos(2*fi);
            Fy.at<float>(i,j) = sin(2*fi);

            //fill blocks
            for(int u = i; u < i + blockH; u++)
            {
                for(int v = j; v < j + blockW; v++)
                {
                    orientationMap.at<float>(u,v) = fi;
                    Fx.at<float>(u,v) = Fx.at<float>(i,j);
                    Fy.at<float>(u,v) = Fy.at<float>(i,j);
                    coherence.at<float>(u,v) = (coh < 0.85) ? 1 : 0;
                }
            }
        }
    } ///for

    GaussConvolveWithStep(Fx, Fx_gauss, 5, blockSize);
    GaussConvolveWithStep(Fy, Fy_gauss, 5, blockSize);

    for(int m = 0; m < height; m++)
    {
        for(int n = 0; n < width; n++)
        {
            smoothed.at<float>(m,n) = 0.5*fastAtan2(Fy_gauss.at<float>(m,n), Fx_gauss.at<float>(m,n))*CV_PI/180;
            if((m % blockSize) == 0 && (n % blockSize) == 0){
                int x = n;
                int y = m;
                int ln = sqrt(2*pow(blockSize,2))/2;
                float dx = ln*cos( smoothed.at<float>(m,n) - CV_PI/2);
                float dy = ln*sin( smoothed.at<float>(m,n) - CV_PI/2);
                arrowedLine(fprintWithDirectionsSmoo, Point(x, y+blockH), Point(x + dx, y + blockW + dy), Scalar::all(255), 1, CV_AA, 0, 0.06*blockSize);
                // qDebug() << Fx_gauss.at<float>(m,n) << Fy_gauss.at<float>(m,n) << smoothed.at<float>(m,n);
                // imshow("Orientation", fprintWithDirectionsSmoo);
                // waitKey(0);
            }
        }
    } ///for2

    normalize(orientationMap, orientationMap, 0, 1, NORM_MINMAX);
    imshow("Orientation field", orientationMap);
    orientationMap = smoothed.clone();
    normalize(smoothed, smoothed, 0, 1, NORM_MINMAX);
    imshow("Smoothed orientation field", smoothed);
    imshow("Coherence", coherence);
    imshow("Orientation", fprintWithDirectionsSmoo);
}
Seems nothing was forgotten :)
I have read your code thoroughly and found that you have made a mistake while calculating sum3 and sum4:
sum3 += inputImage.at<float>(u,v) * lowPassX.at<float>(i - u*lowPassSize, j - v * lowPassSize);
sum4 += inputImage.at<float>(u, v) * lowPassY.at<float>(i - u*lowPassSize, j - v * lowPassSize);
instead of inputImage you should use the low-pass filter kernel W, i.e. convolve W with the lowPassX/lowPassY field rather than multiplying by the input image.
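To make that concrete, here is a minimal OpenCV sketch of the corrected smoothing step (names are illustrative; a box filter stands in for the paper's low-pass kernel W, and phiX/phiY are assumed to be CV_32F Mats holding cos(2*theta) and sin(2*theta)):

#include <cmath>
#include <opencv2/imgproc.hpp>

// Smooth the continuous vector field (phiX, phiY) with a low-pass kernel,
// then recover the orientation angle from the filtered components.
void smoothOrientation(const cv::Mat& phiX, const cv::Mat& phiY,
                       cv::Mat& smoothedTheta, int lowPassSize = 5)
{
    cv::Mat fx, fy;
    cv::blur(phiX, fx, cv::Size(lowPassSize, lowPassSize)); // W * Phi_x
    cv::blur(phiY, fy, cv::Size(lowPassSize, lowPassSize)); // W * Phi_y
    smoothedTheta.create(phiX.size(), CV_32F);
    for (int i = 0; i < fx.rows; ++i)
        for (int j = 0; j < fx.cols; ++j)
            smoothedTheta.at<float>(i, j) =
                0.5f * std::atan2(fy.at<float>(i, j), fx.at<float>(i, j));
}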

I made a processing program that generates a mandelbrot set but don't know how to effectively implement a zoom method

I'm not sure if it is possible in Processing, but I would like to be able to zoom in on the fractal without it being extremely laggy and buggy. What I currently have is:
int maxIter = 100;
float zoom = 1;
float x0, y0;

void setup(){
  size(500,300);
  noStroke();
  smooth();
  // width/height are only valid after size(), so set the pan offsets here
  x0 = width/2;
  y0 = height/2;
}

void draw(){
  translate(x0, y0);
  scale(zoom);
  for(float Py = 0; Py < height; Py++){
    for(float Px = 0; Px < width; Px++){
      // scale pixel coordinates to Mandelbrot scale
      float w = width;
      float h = height;
      float xScaled = (Px * (3.5/w)) - 2.5;
      float yScaled = (Py * (2/h)) - 1;
      float x = 0;
      float y = 0;
      int iter = 0;
      while( x*x + y*y < 2*2 && iter < maxIter){
        float tempX = x*x - y*y + xScaled;
        y = 2*x*y + yScaled;
        x = tempX;
        iter += 1;
      }
      // color pixels: fill() must be set before rect() draws them
      color c = pickColor(iter);
      fill(c);
      rect(Px, Py, 1, 1);
    }
  }
}

// pick color based on time pixel took to escape (number of iterations through loop)
color pickColor(int iters){
  color b = color(0,0,0);
  if(iters == maxIter) return b;
  int l = 1;
  color[] colors = new color[maxIter];
  for(int i = 0; i < colors.length; i++){
    switch(l){
      case 1 : colors[i] = color(255,0,0); break;
      case 2 : colors[i] = color(0,0,255); break;
      case 3 : colors[i] = color(0,255,0); break;
    }
    if(l == 1 || l == 2) l++;
    else if(l == 3) l = 1;
    else l--;
  }
  return colors[iters];
}

// allow zooming in and out
void mouseWheel(MouseEvent event){
  float direction = event.getCount();
  if(direction < 0) zoom += .02;
  if(direction > 0) zoom -= .02;
}

// allow dragging back and forth to change view
void mouseDragged(){
  x0 += mouseX - pmouseX;
  y0 += mouseY - pmouseY;
}
but it doesn't work very well. It works alright at the size and maximum iteration count I have set now (but still not well), and it is completely unusable at larger sizes or higher maximum iterations.
The G4P library has an example that does exactly this. Download the library and go to the G4P_MandelBrot example. The example can be found online here.
Hope this helps!
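If you would rather keep your own renderer, two changes usually help: write into the sketch's pixel buffer (loadPixels()/pixels[]/updatePixels()) instead of calling rect() once per pixel, and fold zoom and pan into the pixel-to-complex mapping rather than using scale()/translate(). A rough sketch of such a mapping (written in C++ here; the names and the 3.5-unit base width are illustrative):

#include <complex>

// View parameters: the complex point at the screen centre plus a zoom factor.
struct View { double centerRe = -0.75, centerIm = 0.0, zoom = 1.0; };

// Map a pixel to the complex plane; the visible width is 3.5/zoom units,
// so increasing zoom magnifies around the centre at full pixel resolution.
std::complex<double> pixelToComplex(int px, int py, int w, int h, const View& v)
{
    double scale = 3.5 / (w * v.zoom);
    return { v.centerRe + (px - w / 2.0) * scale,
             v.centerIm + (py - h / 2.0) * scale };
}

int escapeIterations(std::complex<double> c, int maxIter)
{
    std::complex<double> z = 0;
    int iter = 0;
    while (std::norm(z) < 4.0 && iter < maxIter) { z = z * z + c; ++iter; }
    return iter;
}

Each frame is then computed once at screen resolution no matter how deep the zoom, so the cost stays roughly constant until maxIter becomes the bottleneck.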

GL_TRIANGLE_STRIP - Generating a grid for a single draw call (degenerate triangles)

I need to create a grid ready for GL_TRIANGLE_STRIP rendering.
My grid is defined by just:
Position (self-explanatory)
Size (the x,y size)
Resolution (the spacing between each vertex in x,y)
Here is the method used to create verts/indices and return them:
int iCols = vSize.x / vResolution.x;
int iRows = vSize.y / vResolution.y;

// Create Vertices
for(int y = 0; y < iRows; y++)
{
    for(int x = 0; x < iCols; x++)
    {
        float startu = (float)x / (float)vSize.x;
        float startv = (float)y / (float)vSize.y;
        tControlVertex.Color = vColor;
        tControlVertex.Position = CVector3(x * vResolution.x, y * vResolution.y, 0);
        tControlVertex.TexCoord = CVector2(startu, startv - 1.0);
        vMeshVertices.push_back(tControlVertex);
    }
}

// Create Indices
rIndices.clear();
for (int r = 0; r < iRows - 1; r++)
{
    rIndices.push_back(r*iCols);
    for (int c = 0; c < iCols; c++)
    {
        rIndices.push_back(r*iCols + c);
        rIndices.push_back((r+1)*iCols + c);
    }
    rIndices.push_back((r + 1) * iCols + (iCols - 1));
}
And to visualise that, a few examples first.
1) Size 512x512, Resolution 64x64, so it should be made of 8x8 quads, but I get 7x7 only.
2) Size 512x512, Resolution 128x128, so it should be made of 4x4 quads, but I get 3x3 only.
3) Size 128x128, Resolution 8x8, so it should be made of 16x16 quads, but I get 15x15 only.
So as you can see, I am missing the last row and last column somewhere. Where am I going wrong?
Short answer: make your for-loop test <=, as compared to just <, when you're generating your vertices.
The issue is that the computation of your iRows and iCols variables is counting the number of quads of a particular resolution for a range of pixels; call that n. For a triangle strip of n triangles you need n+2 vertices, so you're just missing the last "row" and "column" of vertices.
The index generation loop then needs to account for the new row stride of iCols+1 vertices:
for ( int r = 0; r < iRows; ++r ) {
    for ( int c = 0; c <= iCols; ++c ) {
        rIndices.push_back( r*(iCols+1) + c );
        rIndices.push_back( (r+1)*(iCols+1) + c );
    }
}
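And to get back to the question's goal of a single draw call, here is a rough sketch of index generation with the degenerate-triangle stitching folded in (assuming (iCols+1) x (iRows+1) vertices laid out row-major, as after the fix above):

#include <vector>

std::vector<int> gridStripIndices(int iCols, int iRows)
{
    const int stride = iCols + 1;                    // vertices per row
    std::vector<int> idx;
    for (int r = 0; r < iRows; ++r)
    {
        if (r > 0)
            idx.push_back(r * stride);               // degenerate: repeat row start
        for (int c = 0; c <= iCols; ++c)
        {
            idx.push_back(r * stride + c);           // top vertex
            idx.push_back((r + 1) * stride + c);     // bottom vertex
        }
        if (r < iRows - 1)
            idx.push_back((r + 1) * stride + iCols); // degenerate: repeat row end
    }
    return idx;
}

The repeated indices produce zero-area triangles that the GPU discards, which is what lets every row live in one GL_TRIANGLE_STRIP.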

Image rotation algorithm for a square pixel grid

I'm currently working on my own little online pixel editor.
Now I'm trying to add a rotation function.
But I can't quite figure out how to realize it.
Here is the basic query for my pixel grid:
for (var y = 0; y < pixelAmount; y++) {
    for (var x = 0; x < pixelAmount; x++) {
        var name = y + "x" + x;
        newY = ?? ;
        newX = ?? ;
        if ($(newY + "x" + newX).style.backgroundColor != "rgb(255, 255, 255)")
            { $(name).style.backgroundColor = $(newY + "x" + newX).style.backgroundColor; }
    }
}
How do I calculate newY and newX?
How do you rotate a two dimensional array?
From the post above I got this method (in C#):
int a[4][4];
int n = 4;
int tmp;
for (int i = 0; i < n/2; i++){
    for (int j = i; j < n-i-1; j++){
        tmp = a[i][j];
        a[i][j] = a[j][n-i-1];
        a[j][n-i-1] = a[n-i-1][n-j-1];
        a[n-i-1][n-j-1] = a[n-j-1][i];
        a[n-j-1][i] = tmp;
    }
}
or this one:
int[,] array = new int[4,4] {
    { 1,2,3,4 },
    { 5,6,7,8 },
    { 9,0,1,2 },
    { 3,4,5,6 }
};
int[,] rotated = RotateMatrix(array, 4);

static int[,] RotateMatrix(int[,] matrix, int n) {
    int[,] ret = new int[n, n];
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            ret[i, j] = matrix[n - j - 1, i];
        }
    }
    return ret;
}
The first method doesn't use a second array (matrix), in order to save memory.
Take a look at this doc (Section 3: Rotating a bitmap with an angle of any value). It walks you through how to do the math and gives you some sample code (C++, but it should be good enough for what you need).
If top performance is not hugely important (which is usually the case), you can rotate the picture clockwise by flipping it across the main diagonal and then flipping it horizontally. To rotate counterclockwise, flip horizontally first, then across the main diagonal. The code is much simpler.
For the diagonal flip you exchange the values of image[x,y] with image[y,x] in a loop like this:
for( var x = 0; x < pixelAmount; ++x )
    for( var y = x + 1; y < pixelAmount; ++y )
        swap(image[x,y], image[y,x]);
For horizontal flip you do something like
for( var y = 0; y < pixelAmount; ++y )
{
    i = 0; j = pixelAmount - 1;
    while( i < j ) {
        swap( image[i,y], image[j,y] );
        ++i; --j;
    }
}
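Putting the two flips together, here is a minimal sketch of the clockwise rotation (plain C++ on an n x n buffer; adapt the swap/reverse to however your pixel grid is stored):

#include <algorithm>
#include <vector>

// Rotate 90 degrees clockwise: flip across the main diagonal (transpose),
// then flip each row horizontally.
void rotate90CW(std::vector<std::vector<int>>& img)
{
    const int n = (int)img.size();
    for (int x = 0; x < n; ++x)
        for (int y = x + 1; y < n; ++y)
            std::swap(img[x][y], img[y][x]);
    for (auto& row : img)
        std::reverse(row.begin(), row.end());
}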
