PIX* returnRotatedImage(PIX* image, float theta)
{
PIX* rotated = pixRotate(image, -theta, L_ROTATE_AREA_MAP, L_BRING_IN_BLACK, image->w, image->h);
return rotated;
}
When I execute the above code on an image, the resulting image has the edges cut off.
Example: the original scan, followed by the image after being run through the above function to rotate it by ~89 degrees.
I don't have 10 reputation yet, so I can't embed the images, but here's a link to the two pictures: http://imgur.com/a/y7wAn
I need it to work for arbitrary angles as well (not just angles close to 90 degrees), so unfortunately the solution presented here won't work.
The description for the pixRotate function says:
* (6) The dest can be expanded so that no image pixels
* are lost. To invoke expansion, input the original
* width and height. For repeated rotation, use of the
* original width and height allows the expansion to
* stop at the maximum required size, which is a square
* with side = sqrt(w*w + h*h).
However, it seems to be expanding the destination after rotation, and thus the pixels are lost, even if the final image size is correct. If I use pixRotate(..., 0, 0) instead of pixRotate(..., w, h), I end up with the image rotated within the original dimensions: http://i.imgur.com/YZSETl5.jpg.
Am I interpreting the pixRotate function description incorrectly? Is what I want to do even possible? Or is this possibly a bug?
Thanks in advance.
Related
(More info at the end.)
I am trying to render a small picture-in-picture display over my scene. The PiP is just a smaller texture, but it is intended to reveal secret objects in the scene when it is placed over them.
To do this, I want to render my scene, then render the SAME scene on the smaller texture, but with the exact same positioning as the main scene. The intended result would be something like this:
My problem is... I cannot get the scene on the smaller texture to match up 1:1. I keep trying various kludges, but ultimately I suspect that I need to do something to the projection matrix to pan it over to the location of the frame. I can get it to zoom correctly...just can't get it to pan.
Can anyone suggest what I need to do to my projection matrix to render my scene 1:1 (but panned by x,y) onto a smaller texture?
The data I have:
Resolution of the full-screen framebuffer
Resolution of the smaller texture
XY coordinate where I want to draw the smaller texture as an overlay sprite
The world/view/projection matrices from the original full-screen scene
The viewport from the original full-screen scene
(Edit)
Here is the function I use to produce the 3D camera:
void Make3DCamera(Vector theCameraPos, Vector theLookAt, Vector theUpVector, float theFOV, Point theRez, Matrix& theViewMatrix,Matrix& theProjectionMatrix)
{
Matrix aCombinedViewMatrix;
Matrix aViewMatrix;
aCombinedViewMatrix.Scale(1,1,-1);
theCameraPos.mZ*=-1;
theLookAt.mZ*=-1;
theUpVector.mZ*=-1;
aCombinedViewMatrix.Translate(-theCameraPos);
Vector aLookAtVector=theLookAt-theCameraPos;
Vector aSideVector=theUpVector.Cross(aLookAtVector);
theUpVector=aLookAtVector.Cross(aSideVector);
aLookAtVector.Normalize();
aSideVector.Normalize();
theUpVector.Normalize();
aViewMatrix.mData.m[0][0] = -aSideVector.mX;
aViewMatrix.mData.m[1][0] = -aSideVector.mY;
aViewMatrix.mData.m[2][0] = -aSideVector.mZ;
aViewMatrix.mData.m[3][0] = 0;
aViewMatrix.mData.m[0][1] = -theUpVector.mX;
aViewMatrix.mData.m[1][1] = -theUpVector.mY;
aViewMatrix.mData.m[2][1] = -theUpVector.mZ;
aViewMatrix.mData.m[3][1] = 0;
aViewMatrix.mData.m[0][2] = aLookAtVector.mX;
aViewMatrix.mData.m[1][2] = aLookAtVector.mY;
aViewMatrix.mData.m[2][2] = aLookAtVector.mZ;
aViewMatrix.mData.m[3][2] = 0;
aViewMatrix.mData.m[0][3] = 0;
aViewMatrix.mData.m[1][3] = 0;
aViewMatrix.mData.m[2][3] = 0;
aViewMatrix.mData.m[3][3] = 1;
if (gG.mRenderToSprite) aViewMatrix.Scale(1,-1,1);
aCombinedViewMatrix*=aViewMatrix;
// Projection Matrix
float aAspect = (float) theRez.mX / (float) theRez.mY;
float aNear = gG.mZRange.mData1;
float aFar = gG.mZRange.mData2;
float aWidth = gMath.Cos(theFOV / 2.0f);
float aHeight = gMath.Cos(theFOV / 2.0f);
if (aAspect > 1.0) aWidth /= aAspect;
else aHeight *= aAspect;
float s = gMath.Sin(theFOV / 2.0f);
float d = 1.0f - aNear / aFar;
Matrix aPerspectiveMatrix;
aPerspectiveMatrix.mData.m[0][0] = aWidth;
aPerspectiveMatrix.mData.m[1][0] = 0;
aPerspectiveMatrix.mData.m[2][0] = gG.m3DOffset.mX/theRez.mX/2;
aPerspectiveMatrix.mData.m[3][0] = 0;
aPerspectiveMatrix.mData.m[0][1] = 0;
aPerspectiveMatrix.mData.m[1][1] = aHeight;
aPerspectiveMatrix.mData.m[2][1] = gG.m3DOffset.mY/theRez.mY/2;
aPerspectiveMatrix.mData.m[3][1] = 0;
aPerspectiveMatrix.mData.m[0][2] = 0;
aPerspectiveMatrix.mData.m[1][2] = 0;
aPerspectiveMatrix.mData.m[2][2] = s / d;
aPerspectiveMatrix.mData.m[3][2] = -(s * aNear / d);
aPerspectiveMatrix.mData.m[0][3] = 0;
aPerspectiveMatrix.mData.m[1][3] = 0;
aPerspectiveMatrix.mData.m[2][3] = s;
aPerspectiveMatrix.mData.m[3][3] = 0;
theViewMatrix=aCombinedViewMatrix;
theProjectionMatrix=aPerspectiveMatrix;
}
Edit to add more information:
Just playing and tweaking numbers, I have come to a "close" result. However, the "close" result requires multiplication by some kludge numbers that I don't understand.
Here's what I'm doing to the perspective matrix to produce my close result:
//Before calling Make3DCamera, adjusting FOV:
aFOV*=smallerTexture.HeightF()/normalRenderSize.HeightF(); // Zoom it
aFOV*=1.02f; // <- WTH is this?
//Then, to pan the camera over to the x/y position I want, I do:
Matrix aPM=GetCurrentProjectionMatrix();
float aX=(screenX-normalRenderSize.WidthF()/2.0f)/2.0f;
float aY=(screenY-normalRenderSize.HeightF()/2.0f)/2.0f;
aX*=1.07f; // <- WTH is this?
aY*=1.07f; // <- WTH is this?
aPM.mData.m[2][0]=-aX/normalRenderSize.HeightF();
aPM.mData.m[2][1]=-aY/normalRenderSize.HeightF();
SetCurrentProjectionMatrix(aPM);
When I do this, my new picture is VERY close... but not exactly perfect-- the small render tends to drift away from "center" the further the "magic window" is from the center. Without the kludge number, the drift away from center with the magic window is very pronounced.
The kludge numbers 1.02f for zoom and 1.07f for pan reduce the inaccuracies and drift to a fraction of a pixel, but those numbers must be a ratio from somewhere, right? They work at ANY RESOLUTION, though-- so I have a 1280x800 screen and a 256x256 magic window texture... if I change the screen to 1024x768, it all still works.
Where the heck are these numbers coming from?
If you don't care about sub-optimal performance (i.e., drawing the whole scene twice) and if you don't need the smaller scene in a texture, an easy way to obtain the overlay with pixel perfect precision is:
Set up main scene (model/view/projection matrices, etc.) and draw it as you are now.
Use glScissor to set the rectangle for the overlay. glScissor takes the screen-space x, y, width, and height and discards anything outside that rectangle. It looks like you have those four data items already, so you should be good to go.
Call glEnable(GL_SCISSOR_TEST) to actually turn on the test.
Set the shader variables (if you're using shaders) for drawing the greyscale scene/hidden objects/etc. You still use the same view and projection matrices that you used for the main scene.
Draw the greyscale scene/hidden objects/etc.
Call glDisable(GL_SCISSOR_TEST) so you won't be scissoring at the start of the next frame.
Draw the red overlay border, if desired.
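For illustration, here's a minimal sketch of that sequence using PyOpenGL-style names. The draw_scene, draw_overlay_border callables and the secret flag are placeholders for whatever your renderer provides, not part of any particular API:
from OpenGL.GL import glEnable, glDisable, glScissor, GL_SCISSOR_TEST

def render_frame(draw_scene, draw_overlay_border, overlay_x, overlay_y, overlay_w, overlay_h):
    draw_scene(secret=False)           # 1. main scene over the full framebuffer
    # 2./3. restrict rasterization to the overlay rectangle (window coords, origin bottom-left)
    glScissor(overlay_x, overlay_y, overlay_w, overlay_h)
    glEnable(GL_SCISSOR_TEST)
    draw_scene(secret=True)            # 4./5. same view/projection matrices, "reveal" shading
    glDisable(GL_SCISSOR_TEST)         # 6. stop scissoring before the next frame
    draw_overlay_border()              # 7. optional red border drawn last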
Now, if you actually need the overlay in its own texture for some reason, this probably won't be adequate...it could be made to work either with framebuffer objects and/or pixel readback, but this would be less efficient.
Most people completely overcomplicate such issues. There is absolutely no magic to applying transformations after applying the projection matrix.
If you have a projection matrix P (and I'm assuming default OpenGL conventions here where P is constructed in a way that the vector is post-multiplied to the matrix, so for an eye space vector v_eye, we get v_clip = P * v_eye), you can simply pre-multiply some other translate and scale transforms to cut out any region of interest.
Assume you have a viewport of size w_view * h_view pixels, and you want to find a projection matrix which renders only a tile of w_tile * h_tile pixels, beginning at pixel location (x_tile, y_tile) (again, assuming default GL conventions here, window space origin is bottom left, so y_tile is measured from the bottom). Also note that the _tile coordinates are to be interpreted relative to the viewport; in the typical case, that would start at (0,0) and have the size of your full framebuffer, but this is by no means required nor assumed here.
Since after applying the projection matrix we are in clip space, we need to transform our coordinates from window space pixels to clip space. Note that clip space is a 4D homogeneous space, but we can use any w value we like (except 0) to represent any point (as a point in the 3D space we care about forms a line in the 4D space we work in), so let's just use w=1 for simplicity's sake.
The view volume in clip space is denoted by the [-w,w] range, so in the w=1 hyperplane, it is [-1,1]. Converting our tile into this space yields:
x_clip = 2 * (x_tile / w_view) -1
y_clip = 2 * (y_tile / h_view) -1
w_clip = 2 * (w_tile / w_view)
h_clip = 2 * (h_tile / h_view)
We now just need to translate the objects such that the center of the tile is moved to the center of the view volume, which by definition is the origin, and scale the w_clip * h_clip sized region to the full [-1,1] extent in each dimension.
That means:
T = translate(-(x_clip + 0.5*w_clip), -(y_clip + 0.5 *h_clip), 0)
S = scale(2.0/w_clip, 2.0/h_clip, 1.0)
We can now create the modified projection matrix P' as P' = S * T * P, and that's all there is. Rendering with P' instead of P will render exactly the region of your tile to whatever viewport you are using, so for it to be pixel-exact with respect to your original viewport, you must now render with a viewport which is also w_tile * h_tile pixels big.
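As a sanity check, here is the same construction written out with numpy (column vectors and v_clip = P @ v_eye, matching the convention above; numpy itself is just a convenience here, the content is only the arithmetic already described):
import numpy as np

def tile_projection(P, x_tile, y_tile, w_tile, h_tile, w_view, h_view):
    # window-space tile -> clip space on the w = 1 hyperplane
    x_clip = 2.0 * x_tile / w_view - 1.0
    y_clip = 2.0 * y_tile / h_view - 1.0
    w_clip = 2.0 * w_tile / w_view
    h_clip = 2.0 * h_tile / h_view
    # T: move the tile center to the origin of the view volume
    T = np.identity(4)
    T[0, 3] = -(x_clip + 0.5 * w_clip)
    T[1, 3] = -(y_clip + 0.5 * h_clip)
    # S: stretch the w_clip x h_clip region to the full [-1, 1] range
    S = np.identity(4)
    S[0, 0] = 2.0 / w_clip
    S[1, 1] = 2.0 / h_clip
    return S @ T @ P   # P' -- render with a w_tile x h_tile viewport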
Note that there is also another approach: the viewport is not clamped against the framebuffer you're rendering to. It is actually valid to provide negative values for x and y. If the framebuffer you render your tile into is exactly w_tile * h_tile pixels, you could simply set glViewport(-x_tile, -y_tile, w_view, h_view) and render with the unmodified projection matrix P instead.
Note:
The question is specific to the PHP GD library only
This question is NOT about how to crop an image to a target aspect ratio; rather, it is about how to draw an overlay extending outside the image
I want to create a custom graphic by putting together a background image with a polygon and some texts.
Input background images are of varied dimensions and aspect ratios, but the final graphic has to be of a fixed (2:1) aspect ratio (it also has to be of pre-defined dimensions, but resizing the image is trivial, so the correct aspect ratio is my only target).
Presently I'm cropping-to-fit my input image to the target aspect ratio (2:1) by performing max-center area cropping using the imagecrop function. Thereafter I draw a red polygon on it as shown below (ignore the texts drawn on the red band) using the imagefilledpolygon method [the cropping screenshot below is for demonstration purposes only; it is actually being done programmatically via the imagecrop function]
Here's my function that draws the overlay (this function is called after cropping of the image to the 2:1 aspect ratio is done):
/**
* adds overlay (colored band) on the image
* for better output, overlay must be added before resizing
*/
public function withOverlay(): NotifAdsCreativeGenerator {
// Prepare custom red-color Hex to RGB https://stackoverflow.com/a/15202130/3679900
list($r, $g, $b) = sscanf(self::OVERLAY_COLOR, "#%02x%02x%02x");
$custom_red_color = imagecolorallocate($this->getCrrImage(), $r, $g, $b);
// prepare coordinates for polygon
$coords = [
[0, 0],
[(int) ($this->getCrrWidth() * self::OVERLAY_BEGIN_X_RATIO), 0],
[(int) ($this->getCrrWidth() * self::OVERLAY_END_X_RATIO), $this->getCrrHeight()],
[0, $this->getCrrHeight()]
];
$flattened_coords = array_merge(...$coords);
// draw polygon on image
imagefilledpolygon($this->getCrrImage(), $flattened_coords, count($flattened_coords) / 2, $custom_red_color);
return $this;
}
But what I want is to crop the image to a ~ 1.28:1 aspect ratio (the approximate ratio of the right part of the graphic without the red band) and then draw the polygon (extending) outside the image, so as to obtain the final graphic in the same 2:1 aspect ratio as shown below
I'm able to crop the image to my desired aspect ratio (1.28:1), but I can't figure out a way to draw the polygon outside the image bounds (effectively expanding the image in the process). Is there a way to do this using the PHP-GD library?
It was just a lack of understanding (about how PHP-GD works and the methods available) on my part, but the solution is pretty simple:
create an empty 'canvas' image of the desired dimensions (and the 2:1 target aspect ratio) using the imagecreatetruecolor function
(after cropping), copy the image onto the right side of the canvas using the imagecopy method (some basic maths has to be done to determine the offset where the image has to be placed on the canvas)
now, as before, the red polygon can be drawn on the left side of the canvas to obtain the final graphic
/**
* adds overlay (colored band) on the image
* for better output, overlay must be added before resizing
*
* This method tries to preserve maximum center region of input image by performing minCenterCrop(.) on it
* before drawing an overlay that extends beyond left border of the cropped image
*
* (since this method incorporates a call to 'withMinCenterCrop', calling that method before this is not required
* and is redundant.) For the benefits of this method over 'withOverlay', read the docstring comment of the
* 'withMinCenterCrop' method.
* @return NotifAdsCreativeGenerator
*/
public function withExtendedOverlay(): NotifAdsCreativeGenerator {
// perform min center crop to the 1.28:1 aspect ratio (preserve max central portion of image)
$this->withMinCenterCrop();
// this $required_canvas_aspect_ratio calculates to 2.0 (2:1 aspect ratio)
// calculate aspect ratio & dimensions of empty 'canvas' image
// since canvas is wider than min center-cropped image (as space on the left will be occupied by red overlay)
// therefore its height is matched with the cropped image and its width is calculated
$required_canvas_aspect_ratio = self::IMAGE_WIDTH / self::IMAGE_HEIGHT;
// height of cropped image
$canvas_height = $this->getCrrHeight();
$canvas_width = (int) ($required_canvas_aspect_ratio * $canvas_height); // GD expects integer dimensions
// create a new 'canvas' (empty image) on which we will
// 1. draw the existing input 'min-cropped' image on the right
// 2. draw the red overlay on the left
$canvas_image = imagecreatetruecolor($canvas_width, $canvas_height);
// copy contents of image on right side of canvas
imagecopy(
$canvas_image,
// cropped image
$this->getCrrImage(),
(int) (self::OVERLAY_BEGIN_X_RATIO * $canvas_width),
0,
0,
0,
// dimensions of cropped image
$this->getCrrWidth(),
$this->getCrrHeight()
);
// draw red band overlay on left side of canvas
$this->crr_image = $canvas_image;
return $this->withOverlay();
}
I have a z-image from a ToF camera (Kinect V2). I do not have the pixel size, but I know that the depth image has a resolution of 512x424. I also know that it has a FOV of 70.6x60 degrees.
I asked how to get the pixel size before, here. In MATLAB this code looks like the following.
The brighter the pixel, the closer the object.
close all
clear all
%Load image
depth = imread('depth_0_30_0_0.5.png');
frame_width = 512;
frame_height = 424;
horizontal_scaling = tan((70.6 / 2) * (pi/180));
vertical_scaling = tan((60 / 2) * (pi/180));
%pixel size
with_size = horizontal_scaling * 2 .* (double(depth)/frame_width);
height_size = vertical_scaling * 2 .* (double(depth)/frame_height);
The image itself is a cube rotated by 30 degrees, and can be seen here:
What I want to do now is calculate the horizontal angle of a pixel to the camera-plane and the vertical angle to the camera plane.
I tried to do this with triangulation: I calculate the z-distance from one pixel to another, first in the horizontal direction and then in the vertical direction. I do this with a convolution:
%get the horizontal errors
dx = abs(conv2(depth,[1 -1],'same'));
%get the vertical errors
dy = abs(conv2(depth,[1 -1]','same'));
After this I calculate it via the atan, like this:
horizontal_angle = rad2deg(atan(with_size ./ dx));
vertical_angle = rad2deg(atan(height_size ./ dy));
horizontal_angle(isnan(horizontal_angle)) = 0;
vertical_angle(isnan(vertical_angle)) = 0;
Which gives back promising results, like these:
However, using a slightly more complex image like this one, which is turned by 60° and 30°,
gives back the same angle images for horizontal and vertical angles, which look like this:
After subtracting both images from each other, I get the following image - which shows that there is a difference between those two.
So, I have the following questions: How can I prove this concept? Is the math correct, and the test case just poorly chosen? Is the angle difference from horizontal to vertical angles in the two images too close? Are there any errors in the calculation?
While my previous code may look good, it had a flaw. I tested it with smaller images (5x5, 3x3 and so on) and saw that there is an offset created by the difference picture (dx, dy) made by the convolution. It is simply not possible to map the difference picture (which holds the difference between two pixels) onto the pixels themselves, since the difference picture is smaller than the original one.
For a fast fix, I do a downsampling. So I changed the filter mask to:
%get the horizontal differences
dx = abs(conv2(depth,[1 0 -1],'valid'));
%get the vertical differences
dy = abs(conv2(depth,[1 0 -1]','valid'));
And changed the angle function to:
%get the angles by the tangent
horizontal_angle = rad2deg(atan(with_size(2:end-1,2:end-1)...
./ dx(2:end-1,:)))
vertical_angle = rad2deg(atan(height_size(2:end-1,2:end-1)...
./ dy(:,2:end-1)))
Also I used a padding function to get the angle map to the same size as the original images.
horizontal_angle = padarray(horizontal_angle,[1 1],0);
vertical_angle = padarray(vertical_angle,[1 1],0);
I am a mechanical engineering student working on a project to automatically detect the weld seam (the seam is an edge that is to be welded) present in a workshop. This image gives the basic terminology involved in welding (http://i.imgur.com/Hfwjq0w.jpg).
To separate the weldment from the other objects, I have taken the background image and subtracted the foreground image containing the weldment, to obtain only the weldment (http://i.imgur.com/v7yBWs1.jpg). After image subtraction, the shadow, glare and remnant noise of the subtracted background are still present.
As I want to automatically identify only the weld seam, without the outer boundary of the weldment, I have tried to detect the edges in the weldment image using the Canny algorithm and to eliminate the isolated noise using the function bwareaopen. I have somehow obtained the approximate boundary of the weldment and the weld seam. The thresholds I have used were found purely by trial and error, as I don't know a way to set a threshold automatically to detect them.
The problem I am now facing is that I can't specify a definite threshold, as this algorithm should be able to identify the seam of any material regardless of the surface texture, glare and shadow present. I need some assistance to remove the glare, shadow and isolated points from the background-subtracted image.
I also need help to get rid of the outer boundary and obtain only a smooth weld seam from starting point to end point.
I have tried to use the following code:
a=imread('imageofworkpiece.jpg'); %http://i.imgur.com/3ngu235.jpg
b=imread('background.jpg'); %http://i.imgur.com/DrF6wC2.jpg
Ip = imsubtract(b,a);
imshow(Ip) % weldment separated %http://i.imgur.com/v7yBWs1.jpg
BW = rgb2gray(Ip);
c=edge(BW,'canny',0.05); % by trial and error
figure;imshow(c) % %http://i.imgur.com/1UQ8E3D.jpg
bw = bwareaopen(c, 100); % by trial and error
figure;imshow(bw) %http://i.imgur.com/Gnjy2aS.jpg
Can anybody please suggest an adaptive way to set a threshold and remove the outer boundary to detect only the seam? Thank you
Well, this doesn't solve your problem of finding an automatic thresholding algorithm, but I can help with isolating the seam. The seam is along the y axis (will this always be the case?), so I used the Hough transform to isolate only near-vertical lines. Normally it finds all lines, but I restricted the theta search parameter. The code I'm using happens to highlight the longest line segment (I got it directly from the MATLAB website), and that segment is coincidentally the weld seam. But using your bwareaopen-ed image as input, the Hough line detector is able to find the seam. Of course it required a bit of playing around to work, so you are stuck with your original problem of finding optimal settings somehow.
Maybe this can be a springboard for someone else
a=imread('weldment.jpg'); %http://i.imgur.com/3ngu235.jpg
b=imread('weld_bg.jpg'); %http://i.imgur.com/DrF6wC2.jpg
Ip = imsubtract(b,a);
imshow(Ip) % weldment separated %http://i.imgur.com/v7yBWs1.jpg
BW = rgb2gray(Ip);
c=edge(BW,'canny',0.05); % by trial and error
bw = bwareaopen(c, 100); % by trial and error
figure(1);imshow(c) ;title('canny') % %http://i.imgur.com/1UQ8E3D.jpg
figure(2);imshow(bw);title('bw area open') %http://i.imgur.com/Gnjy2aS.jpg
[H,T,R] = hough(bw,'RhoResolution',1,'Theta',-15:5:15);
figure(3)
imshow(H,[],'XData',T,'YData',R,...
'InitialMagnification','fit');
xlabel('\theta'), ylabel('\rho');
axis on, axis normal, hold on;
P = houghpeaks(H,5,'threshold',ceil(0.5*max(H(:))));
x = T(P(:,2)); y = R(P(:,1));
plot(x,y,'s','color','white');
% Find lines and plot them
lines = houghlines(BW,T,R,P,'FillGap',2,'MinLength',30);
figure(4), imshow(BW), hold on
max_len = 0;
for k = 1:length(lines)
xy = [lines(k).point1; lines(k).point2];
plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','green');
% Plot beginnings and ends of lines
plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','yellow');
plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','red');
% Determine the endpoints of the longest line segment
len = norm(lines(k).point1 - lines(k).point2);
if ( len > max_len)
max_len = len;
xy_long = xy;
end
end
% highlight the longest line segment
plot(xy_long(:,1),xy_long(:,2),'LineWidth',2,'Color','blue');
From your image it looks like the weld seam will usually be very dark with a sharp intensity edge, so why don't you use that?
do not use background
create derivation image
dx[y][x]=pixel[y][x]-pixel[y][x-1]
do this for the whole image (if done in place, then x must decrease in the loop!!!)
filter out all derivations lower than the threshold
if (|dx[y][x]|<threshold) dx[y][x]=0; else pixel[y][x]=255; // or whatever values you use
how to obtain the threshold value?
compute the min and max intensity and set the threshold as (max-min)*scale, where scale is a value lower than 1.0 (start with 0.02 or 0.1, for example ...)
do this also for y axis
so compute dy[][] ... and combine dx[][] and dy[][] together, either with OR or with AND logical functions
filter out artifacts
you can use morphological filters or a smoother threshold for this. After all this you will have a mask of the weld-seam pixels
if you need a bounding box, then just loop through all pixels and remember the min/max x,y coords ... (a short numpy sketch of this whole recipe follows this list)
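Not the answerer's code, but a compact numpy sketch of the recipe above for readers who prefer it; the 10%/1.5% scales mirror the tr2/tr3 values used in the C++ further down, everything else is an assumption for illustration:
import numpy as np

def seam_mask(gray, scale_x=0.10, scale_y=0.015):
    # gray: 2-D float array (grayscale intensity)
    dx = np.abs(np.diff(gray, axis=1))   # |pixel[y][x] - pixel[y][x-1]|
    dy = np.abs(np.diff(gray, axis=0))   # |pixel[y][x] - pixel[y-1][x]|
    dx = np.pad(dx, ((0, 0), (1, 0)))    # pad back to the original size
    dy = np.pad(dy, ((1, 0), (0, 0)))
    # thresholds from the min/max of each derivation image
    tr_x = dx.min() + (dx.max() - dx.min()) * scale_x
    tr_y = dy.min() + (dy.max() - dy.min()) * scale_y
    return (dx >= tr_x) & (dy >= tr_y)   # AND combination; use | for OR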
[Notes]
if your images have good lighting then you can ignore the derivation and threshold the intensity directly with something like:
threshold = 0.5*(average_intensity+lowest_intensity)
if you want to really fully automate this then you have to use adaptive thresholds. So try more thresholds in a loop and remember the result closest to the desired output, based on geometry (size, position, etc.) ...
[edit1] Finally had some time/mood for this, so:
Intensity image threshold
You provided just a single image, which is far from enough to make a reliable algorithm. This is the result:
As you can see, without further processing this is not a good approach.
Derivation image threshold
threshold derivation by x (10%)
threshold derivation by y (5%)
AND combination of both 10% di/dx and 1.5% di/dy
The code in C++ looks like this (sorry, I do not use MATLAB):
int x,y,i,i0,i1,tr2,tr3;
pic1=pic0; // copy input image pic0 to pic1
pic2=pic0; // copy input image pic0 to pic2 (just to resize to desired size for derivation)
pic3=pic0; // copy input image pic0 to pic3 (just to resize to desired size for derivation)
pic1.rgb2i(); // RGB -> grayscale
// abs derivate by x
for (y=pic1.ys-1;y>0;y--)
for (x=pic1.xs-1;x>0;x--)
{
i0=pic1.p[y][x ].dd;
i1=pic1.p[y][x-1].dd;
i=i0-i1; if (i<0) i=-i;
pic2.p[y][x].dd=i;
}
// compute min,max derivation
i0=pic2.p[1][1].dd; i1=i0;
for (y=1;y<pic1.ys;y++)
for (x=1;x<pic1.xs;x++)
{
i=pic2.p[y][x].dd;
if (i0>i) i0=i;
if (i1<i) i1=i;
}
tr2=i0+((i1-i0)*100/1000);
// abs derivate by y
for (y=pic1.ys-1;y>0;y--)
for (x=pic1.xs-1;x>0;x--)
{
i0=pic1.p[y ][x].dd;
i1=pic1.p[y-1][x].dd;
i=i0-i1; if (i<0) i=-i;
pic3.p[y][x].dd=i;
}
// compute min,max derivation
i0=pic3.p[1][1].dd; i1=i0;
for (y=1;y<pic1.ys;y++)
for (x=1;x<pic1.xs;x++)
{
i=pic3.p[y][x].dd;
if (i0>i) i0=i;
if (i1<i) i1=i;
}
tr3=i0+((i1-i0)*15/1000);
// threshold the derivation images and combine them
for (y=1;y<pic1.ys;y++)
for (x=1;x<pic1.xs;x++)
{
// copy the original (pic0) pixel for non-thresholded areas; fill the rest with green color
if ((pic2.p[y][x].dd>=tr2)&&(pic3.p[y][x].dd>=tr3)) i=0x00FF00;
else i=pic0.p[y][x].dd;
pic1.p[y][x].dd=i;
}
pic0 is the input image
pic1 is the output image
pic2,pic3 are just temporary storage for the derivations
pic?.xs,pic?.ys is the size of pic?
pic?.p[y][x].dd is pixel access (dd means access the pixel as DWORD ...)
as you can see there is a lot of stuff around (not visible in the first image you provided), so you need to process this further:
segment and separate ...,
use hough transform ...
filter out small artifacts ...
identify object by expected geometry properties (aspect ratio,position,size)
Adaptive thresholds:
For this you need to know the desired output image properties (not possible to reliably deduce from a single input image), then create a function that does the above processing with variable tr2,tr3. Try more options of tr2,tr3 in a loop (loop through all values, or iterate towards better results) and remember the best output (so you also need some function that detects the quality of the output), for example:
quality=0.0; param=0.0;
for (a=0.2;a<=0.8;a+=0.1)
{
pic1=process_image(pic0,a);
q=detect_quality(pic1);
if (q>quality) { quality=q; param=a; pico=pic1; }
}
After this, pico should hold the relatively best thresholded image ... You should process all the thresholds like this separately; inside process_image the targeted threshold must be scaled by a, for example tr2=i0+((i1-i0)*a);
I am writing a drawing program, Whyteboard -- http://code.google.com/p/whyteboard/
I have implemented image-rotating functionality, except that its behaviour is a little odd. I can't figure out the proper logic to rotate the image in relation to the mouse position.
My code is something similar to this:
(these are called from a mouse event handler)
def resize(self, x, y, direction=None):
"""Rotate the image"""
self.angle += 1
if self.angle > 360:
self.angle = 0
self.rotate()
def rotate(self, angle=None):
"""Rotate the image (in radians), turn it back into a bitmap"""
rad = (2 * math.pi * self.angle) / 360
if angle:
rad = (2 * math.pi * angle) / 360
img = self.img.Rotate(rad, (0, 0))
So, basically the angle to rotate the image keeps getting increased when the user moves the mouse. However, this sometimes means you have to "circle" the mouse many times to rotate an image 90 degrees, let alone 360.
But I need it to work like other programs, where the image is rotated in relation to your mouse's position relative to the image.
This is the bit I'm having trouble with. I've left the question language-independent; although I'm using Python and wxPython, it could be applicable to any language.
I'm assuming resize() is called for every mouse movement update. Your problem seems to be the self.angle += 1, which makes you update your angle by 1 degree on each mouse event.
A solution to your problem would be: pick the point on the image where the rotation will be centered (in this case, it's your (0,0) point on self.img.Rotate(), but usually it is the center of the image). The rotation angle should be the angle formed by the line that goes from this point to the mouse cursor, minus the angle formed by the line that goes from this point to the mouse position when the user clicked.
To calculate the angle between two points, use math.atan2(y2-y1, x2-x1) which will give you the angle in radians. (you may have to change the order of the subtractions depending on your mouse position axis).
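A minimal sketch of that calculation (plain Python; center, click_pos and mouse_pos are (x, y) tuples standing in for whatever your event handler tracks):
import math

def rotation_angle(center, click_pos, mouse_pos):
    # angle(center -> current mouse) minus angle(center -> position at mouse-down)
    start = math.atan2(click_pos[1] - center[1], click_pos[0] - center[0])
    now = math.atan2(mouse_pos[1] - center[1], mouse_pos[0] - center[0])
    return now - start   # radians; flip the sign if your y axis points down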
fserb's solution is the way I would go about the rotation too, but something additional to consider is your use of:
img = self.img.Rotate(rad, (0, 0))
If you are performing a bitmap image rotation in response to every mouse drag event, you are going to get a lot of data loss from the combined effect of all the interpolation required for the rotation. For example, rotating by 1 degree 360 times will give you a much blurrier image than the original.
Try having a rotation system something like this:
display_img = self.img.Rotate(rad, pos)
then use the display_img image while you are in rotation mode. When you end rotation mode (onMouseUp maybe), img = display_img.
This type of strategy is good whenever you have a lossy operation with a user preview.
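A rough sketch of that preview/commit pattern, with hypothetical handler names: it assumes wx.Image.Rotate (which returns a new rotated image), and current_angle() stands in for whatever mouse-based angle calculation you settle on:
# methods of a hypothetical shape/canvas class; self.img is a wx.Image,
# self.center the rotation centre
def on_motion(self, event):
    # preview: always rotate the ORIGINAL image, never the previous preview
    self.display_img = self.img.Rotate(self.current_angle(), self.center)
    self.Refresh()

def on_left_up(self, event):
    # commit: the single lossy rotation becomes the stored image
    self.img = self.display_img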
Here's the solution in the end,
def rotate(self, position, origin):
""" position: mouse x/y position, origin: x/y to rotate around"""
origin_angle = self.find_angle(origin, self.center)
mouse_angle = self.find_angle(position, self.center)
angle = mouse_angle - origin_angle
# do the rotation here
def find_angle(self, a, b):
try:
answer = math.atan2((a[0] - b[0]) , (a[1] - b[1]))
except:
answer = 0
return answer