Imagick: resize and crop multiple images without distorting/stretching (PHP / Imagick)

I wish to keep the aspect ratio of images as I resize them. I have 94,000 images that I need to display as preview images on a social site. The challenge is that some users uploaded full-length photos, and as a result they appear stretched after resizing. I am using CodeIgniter to implement this, and the file names are stored in a database table. This is the code I am using:
if (file_exists($_SERVER["DOCUMENT_ROOT"]."/uploads/profiles/purchased_profiles/".$images_->_file_name)) {
    //echo "The file $filename exists";
    $thumb = new Imagick();
    $thumb->readImage($_SERVER["DOCUMENT_ROOT"]."/uploads/profiles/purchased_profiles/".$images_->_file_name);
    $orientation = $thumb->getImageOrientation();
    switch ($orientation) {
        case Imagick::ORIENTATION_BOTTOMRIGHT:
            $thumb->rotateImage("#000", 180); // rotate 180 degrees
            break;
        case Imagick::ORIENTATION_RIGHTTOP:
            $thumb->rotateImage("#000", 90);  // rotate 90 degrees CW
            break;
        case Imagick::ORIENTATION_LEFTBOTTOM:
            $thumb->rotateImage("#000", -90); // rotate 90 degrees CCW
            break;
    }
    $thumb->resizeImage(160, 160, Imagick::FILTER_LANCZOS, 1); // this distorts non-square images
    $thumb->writeImage($_SERVER["DOCUMENT_ROOT"]."/uploads/profiles/purchased_profiles/160x160/".$images_->_file_name);
    $thumb->clear();
    $thumb->destroy();
}

If the images are uploaded in different sizes it's a real challenge. If I combine the solution found here: how do I use Imagick in PHP? (resize & crop) with your code, I come up with the following:
if (file_exists($_SERVER["DOCUMENT_ROOT"]."/uploads/profiles/purchased_profiles/".$images_->_file_name)) {
    $thumb = new Imagick();
    $thumb->readImage($_SERVER["DOCUMENT_ROOT"]."/uploads/profiles/purchased_profiles/".$images_->_file_name);
    $orientation = $thumb->getImageOrientation();
    switch ($orientation) {
        case Imagick::ORIENTATION_BOTTOMRIGHT:
            $thumb->rotateImage("#000", 180); // rotate 180 degrees
            break;
        case Imagick::ORIENTATION_RIGHTTOP:
            $thumb->rotateImage("#000", 90);  // rotate 90 degrees CW
            break;
        case Imagick::ORIENTATION_LEFTBOTTOM:
            $thumb->rotateImage("#000", -90); // rotate 90 degrees CCW
            break;
    }
    // check the current dimensions
    $width  = $thumb->getImageWidth();
    $height = $thumb->getImageHeight();
    if ($height > $width) {
        // portrait: scale the width down to 160 (keeping the aspect ratio),
        // then crop a square equal to the width from the top-left corner
        $new_width  = 160;
        $new_height = (int) ($height / $width * 160);
        $thumb->resizeImage($new_width, $new_height, Imagick::FILTER_LANCZOS, 1);
        $thumb->cropImage($new_width, $new_width, 0, 0);
    } elseif ($width > $height) {
        // landscape: scale the width down to 160, the height follows the aspect ratio
        $new_width  = 160;
        $new_height = (int) ($height / $width * 160);
        $thumb->resizeImage($new_width, $new_height, Imagick::FILTER_LANCZOS, 1);
    } else {
        // already square
        $thumb->resizeImage(160, 160, Imagick::FILTER_LANCZOS, 1);
    }
    $thumb->writeImage($_SERVER["DOCUMENT_ROOT"]."/uploads/profiles/purchased_profiles/160x160/".$images_->_file_name);
    $thumb->clear();
    $thumb->destroy();
}
You may need to crop when the image height is greater than the width, so I decided to crop a square equal to the width from the top-left corner; this way you are unlikely to lose the person's face. Good luck.
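For reference, PHP's Imagick also ships a one-call helper for exactly this: Imagick::cropThumbnailImage($width, $height) scales the image so the shorter side matches the target and then crops the excess from the center. Here is the same resize-then-center-crop idea sketched in C++ with Magick++ (ImageMagick's C++ API), purely as an illustration; the 160x160 target comes from the question, while the function name and paths are made up:

#include <Magick++.h>
#include <algorithm>
#include <string>

// Hedged sketch: scale so the shorter side becomes 160, then crop a centered
// 160x160 square. Call Magick::InitializeMagick() once at program start-up.
void makeSquareThumbnail(const std::string& inPath, const std::string& outPath)
{
    Magick::Image img(inPath);
    // Recent Magick++ releases also offer img.autoOrient() to replace the
    // manual EXIF-orientation switch used in the PHP code above.

    const size_t target = 160;
    const double scale  = static_cast<double>(target) / std::min(img.columns(), img.rows());
    const size_t newW   = static_cast<size_t>(img.columns() * scale + 0.5);
    const size_t newH   = static_cast<size_t>(img.rows()    * scale + 0.5);

    img.filterType(Magick::LanczosFilter);
    img.resize(Magick::Geometry(newW, newH));

    // center the 160x160 crop window
    const size_t xOff = img.columns() > target ? (img.columns() - target) / 2 : 0;
    const size_t yOff = img.rows()    > target ? (img.rows()    - target) / 2 : 0;
    img.crop(Magick::Geometry(target, target, xOff, yOff));

    img.write(outPath);
}

One note on the resize call: a Magick::Geometry without flags is treated as a bounding box that preserves aspect ratio, which is fine here because newW x newH is computed with the same ratio as the source.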

Related

Why doesn't the RoundRect path with gradient fill produce the correct corners on the right side?

I came up with a routine to create a gradient-filled rounded rectangle (button); however, if I omit the code that writes the outline, the lower-right corner looks square and the upper-right doesn't look quite right either. Why is that?
Note: the owner-draw button was created at 23x23.
//-------------------------------------------------------------------------
// Purpose: Draw a rounded rectangle for owner-draw button
//
// Input: dis - [i] owner-draw information structure
// undermouse - [i] flag if button is under mouse
//
// Output: na
//
// Notes: This creates a standard grey type rounded rectangle for owner
// drawn buttons.
//
// This routine does not currently use undermouse to change
// gradient
//
void DrawRoundedButtonRectangle(const DRAWITEMSTRUCT& dis, BOOL undermouse)
{
    UNREFERENCED_PARAMETER(undermouse);
    // save DC before we modify it.
    SaveDC(dis.hDC);
    // create a path for the round rectangle (right/bottom is RECT format of +1)
    BeginPath(dis.hDC);
    RoundRect(dis.hDC, dis.rcItem.left, dis.rcItem.top, dis.rcItem.right, dis.rcItem.bottom, 6, 6);
    EndPath(dis.hDC);
    // save DC before changing clipping region
    SaveDC(dis.hDC);
    // set clipping region to be the path
    SelectClipPath(dis.hDC, RGN_COPY);
    TRIVERTEX vertices[2];
    // setup the starting location and color (light grey)
    vertices[0].x = dis.rcItem.left;
    vertices[0].y = dis.rcItem.top;
    vertices[0].Red = MAKEWORDHL(211, 0);
    vertices[0].Green = MAKEWORDHL(211, 0);
    vertices[0].Blue = MAKEWORDHL(211, 0);
    vertices[0].Alpha = 0xffff;
    // setup the ending location and color (grey)
    vertices[1].x = dis.rcItem.right;  // should this be -1 ?
    vertices[1].y = dis.rcItem.bottom; // should this be -1 ?
    vertices[1].Red = MAKEWORDHL(150, 0);
    vertices[1].Green = MAKEWORDHL(150, 0);
    vertices[1].Blue = MAKEWORDHL(150, 0);
    vertices[1].Alpha = 0xffff;
    // setup index to use for left to right
    GRADIENT_RECT r[1];
    r[0].UpperLeft = 0;
    r[0].LowerRight = 1;
    // fill the DC with a vertical gradient
    GradientFill(dis.hDC, vertices, _countof(vertices), r, _countof(r), GRADIENT_FILL_RECT_V);
    // go back to original clipping area
    RestoreDC(dis.hDC, -1);
    // change the path to be the outline border
    if (WidenPath(dis.hDC)) {
        // set clipping region to be the path
        SelectClipPath(dis.hDC, RGN_COPY);
        // create a gradient on the outline
        GradientFill(dis.hDC, vertices, _countof(vertices), r, _countof(r), GRADIENT_FILL_RECT_V);
    }
    // put back the DC as we received it
    RestoreDC(dis.hDC, -1);
}
The red in the pics shows the background.
The bad button is generated when the WidenPath section is removed.
According to your description, I think you may be talking about this situation.
BeginPath(dis.hDC);
// RoundRect(dis.hDC, dis.rcItem.left, dis.rcItem.top, dis.rcItem.right, dis.rcItem.bottom, 6, 6);
EndPath(dis.hDC);
Let me first analyze why this shape appears.
When you redraw the button, if the width and height of the redrawn area are smaller than the button itself, only part of the button is redrawn.
case WM_CREATE:
    {
        // Button width: 230, button height: 230
        button = CreateRoundRectButton(hWnd, 500, 200, 230, 230, 30, 30, BTN_ID);
        return 0;
    }
    break;
case WM_DRAWITEM:
    {
        DRAWITEMSTRUCT dis;
        dis.CtlType = ODT_BUTTON;
        dis.CtlID = BTN_ID;
        dis.hDC = GetDC(button);
        dis.rcItem.left = 0;
        dis.rcItem.top = 0;
        dis.rcItem.right = 200;  // width of redrawing
        dis.rcItem.bottom = 200; // height of redrawing
        DrawRoundedButtonRectangle(dis, TRUE);
    }
To see the effect more clearly, I will increase the width and height.
If I omit the code that writes the outline, it only executes the following code to produce the gradient:
// fill the DC with a vertical gradient
GradientFill(dis.hDC, vertices, _countof(vertices), r, _countof(r), GRADIENT_FILL_RECT_V);
If I change the X and Y coordinates of the redrawing, the gradient simply follows: when you disable RoundRect, the only call that actually does anything is GradientFill.
Updated:
The redrawn area is based on rcItem. When you fill from a path, only the interior area is covered and the outline is not; WidenPath then converts the path into its outline, and filling that as well gives the true rounded-rect area.
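A small illustration of that point, as a hedged sketch (plain Win32 GDI; the helper name, hdc, rc and the system brushes are placeholders, not the poster's code): converting the bare path to a region yields only the interior, while converting the widened path yields the pen-wide outline band, which is why the second GradientFill pass is needed for the edge and corner pixels.

#include <windows.h>

// Sketch: paint the interior region and the widened-outline region of the
// same rounded-rect path with two different brushes to see which pixels
// each one covers.
void ShowInteriorVsOutline(HDC hdc, const RECT& rc)
{
    // WidenPath requires a geometric pen or a pen wider than one device unit,
    // so select a 2-pixel solid pen for this illustration.
    HPEN pen = CreatePen(PS_SOLID, 2, RGB(0, 0, 0));
    HGDIOBJ oldPen = SelectObject(hdc, pen);

    BeginPath(hdc);
    RoundRect(hdc, rc.left, rc.top, rc.right, rc.bottom, 6, 6);
    EndPath(hdc);
    HRGN interior = PathToRegion(hdc);   // interior only; the path is consumed here

    BeginPath(hdc);
    RoundRect(hdc, rc.left, rc.top, rc.right, rc.bottom, 6, 6);
    EndPath(hdc);
    WidenPath(hdc);                      // redefine the path as its stroked outline
    HRGN outline = PathToRegion(hdc);    // the band the second fill has to cover

    FillRgn(hdc, interior, GetSysColorBrush(COLOR_BTNFACE));
    FillRgn(hdc, outline, GetSysColorBrush(COLOR_BTNSHADOW));

    DeleteObject(interior);
    DeleteObject(outline);
    SelectObject(hdc, oldPen);
    DeleteObject(pen);
}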

How to export fbx textures from Blender for use in Monogame

I'm trying to use Blender to make a simple playing card with a texture on either side for the face and back, and load it into MonoGame. The model itself is fine and shows up in the emulator when run, but I can't seem to get the textures included.
The textures on the card seem to render OK in Blender. The materials are set to 'shadeless', so they shouldn't be affected by light levels, correct?
I've tried the different version settings and various path modes, from 'Copy' to 'Strip Path', when exporting the file.
The content manager has its output directory set to the main content folder, so there shouldn't be a referencing problem with the textures, I hope.
All the textures are in the same folder.
The content is all loaded manually into Visual Studio.
Here's the code I'm using to draw the card, showing the different light settings I've played around with.
private void DrawCard(Model m, Vector3 v, CardFacing c, PlayerFacing p)
{
    foreach (var mesh in m.Meshes)
    {
        foreach (var effect1 in mesh.Effects)
        {
            var effect = (BasicEffect)effect1;
            effect.TextureEnabled = true;
            effect.EnableDefaultLighting();
            //effect.PreferPerPixelLighting = true;
            //effect.AmbientLightColor = Color.White.ToVector3();
            effect.DiffuseColor = Color.White.ToVector3();
            //effect.EmissiveColor = Color.White.ToVector3() * 2f;
            //effect.DirectionalLight0.Direction = Vector3.Normalize(new Vector3(0, 0, 1));
            effect.Alpha = 1;
            effect.VertexColorEnabled = false;

            Matrix mCardFacing = new Matrix();
            switch (c)
            {
                case CardFacing.Down:
                    mCardFacing = Matrix.CreateRotationY((float)(Math.PI / 180) * 180) * Matrix.CreateRotationX((float)(Math.PI / 180) * 90) * Matrix.CreateTranslation(new Vector3(0, 0, 0));
                    break;
                case CardFacing.Up:
                    mCardFacing = Matrix.CreateRotationZ((float)(Math.PI / 180) * 180) * Matrix.CreateRotationX((float)(Math.PI / 180) * 90) * Matrix.CreateTranslation(new Vector3(0, 0, 0));
                    break;
                case CardFacing.Hand:
                    mCardFacing = Matrix.CreateRotationX((float)(Math.PI / 180) * -20);
                    break;
            }

            Matrix mPlayerFacing = new Matrix();
            switch (p)
            {
                case PlayerFacing.North:
                    mPlayerFacing = Matrix.CreateRotationZ((float)(Math.PI / 180) * 0);
                    break;
                case PlayerFacing.East:
                    mPlayerFacing = Matrix.CreateRotationZ((float)(Math.PI / 180) * 90);
                    break;
                case PlayerFacing.South:
                    mPlayerFacing = Matrix.CreateRotationZ((float)(Math.PI / 180) * 180);
                    break;
                case PlayerFacing.West:
                    mPlayerFacing = Matrix.CreateRotationZ((float)(Math.PI / 180) * 270);
                    break;
            }

            effect.World = mCardFacing * Matrix.CreateTranslation(v) * mPlayerFacing;
            effect.View = view;
            effect.Projection = projection;
        }
        mesh.Draw();
    }
}
Any ideas? Thanks.
A 3D model usually doesn't contain the texture itself. An FBX file stores where each vertex is, how the vertices are connected, and which part of the texture maps to each point; it does not store the texture image.
Therefore, you need to load the texture separately, the way you load any other texture (you need a Texture2D, even though this is a 3D model).
Then, you can assign your texture to the effect:
effect.Texture = cardTexture;
I believe this should render correctly.
This also means you can easily swap out the texture of your model on the fly, without having to change the model itself. If your model is just a 2D quad, it might also be simpler to generate it in code, but your current setup isn't wrong either.
You have probably forgotten to embed the resources in the .FBX. In your export settings, change Path Mode to Copy and enable the button right next to it. Refer to this tutorial: https://www.youtube.com/watch?v=kEP34CbPWUo
EDIT:
That didn't work for me, as the MonoGame content build tool kept failing to build the content with this error:
*/Content/Wall2.fbx: error: The source file '*/Content/*0' does not exist!
Then I exported the .FBX without embedded textures, added the texture image manually to Content.mgcb, and switched TextureFormat to 'NoChange'.
And that worked out just fine!
P.S. I would suggest moving this question to https://gamedev.stackexchange.com/

Direct2D image viewer: how do I convert screen coordinates to image coordinates?

I'm trying to figure out how to convert the mouse position (screen coordinates) to the corresponding point on the underlying transformed image drawn on a Direct2D surface.
The code here should be considered pseudocode, as I'm using a modified C++/CLI wrapper around Direct2D for C#; you won't be able to compile this in anything but my own project.
Render()
{
    // The transform matrix combines a rotation, followed by a scaling, then a translation
    renderTarget.Transform = _rotate * _scale * _translate;
    RectF imageBounds = new RectF(0, 0, _imageSize.Width, _imageSize.Height);
    renderTarget.DrawBitmap(this._image, imageBounds, 1, BitmapInterpolationMode.Linear);
}

Zoom(float zoomfactor, PointF mousepos)
{
    // mousePos is in screen coordinates. I need to convert it to image coordinates.
    Matrix3x2 t = _translate.Invert();
    Matrix3x2 s = _scale.Invert();
    Matrix3x2 r = _rotate.Invert();
    PointF center = (t * s * r).TransformPoint(mousePos);
    _scale = Matrix3x2.Scale(zoomfactor, zoomfactor, center);
}
This is incorrect: the scale center starts moving around wildly as the zoom factor smoothly increases or decreases, and the resulting zoom is not smooth and flickers a lot, even though the mouse pointer is sitting still at the center of the client surface. I tried all the combinations I could think of but could not figure it out.
If I set the scale center point as (imagewidth/2, imageheight/2), the resulting zoom is smooth but is always centered on the image center, so I'm pretty sure the flicker isn't due to some other buggy part of the program.
Thanks.
I finally got it right. This gives me perfectly smooth (incremental? relative?) zooming centered on the client center.
(I abandoned the mouse-position idea since I wanted to use mouse movement to drive the zoom.)
protected float zoomf
{
    get
    {
        // extract the scale factor from the scale matrix
        return (float)Math.Sqrt((double)((_scale.M11 * _scale.M11)
            + (_scale.M21 * _scale.M21)));
    }
}

public void Zoom(float factor)
{
    factor = Math.Min(zoomf, 1) * 0.006f * factor;
    factor += 1;
    Matrix3x2 t = _translation;
    t.Invert();
    PointF center = t.TransformPoint(_clientCenter);
    Matrix3x2 m = Matrix3x2.Scale(new SizeF(factor, factor), center);
    _scale = _scale * m;
    Invalidate();
}
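For the original goal of mapping a screen point back to image coordinates, a minimal native Direct2D sketch (assuming the same rotate * scale * translate order used in Render(); the helper name is made up) is to build the combined matrix, invert it once, and transform the point:

#include <d2d1.h>
#include <d2d1helper.h>

// Hypothetical helper: convert a render-target (screen) point into image space
// by inverting the full world transform used when drawing the bitmap.
D2D1_POINT_2F ScreenToImage(const D2D1::Matrix3x2F& rotate,
                            const D2D1::Matrix3x2F& scale,
                            const D2D1::Matrix3x2F& translate,
                            D2D1_POINT_2F screenPoint)
{
    // Same product that Render() assigns to renderTarget.Transform.
    D2D1::Matrix3x2F world = rotate * scale * translate;
    world.Invert();                         // in place; fails only if the matrix is singular
    return world.TransformPoint(screenPoint);
}

Inverting the product directly avoids having to get the order of the individual inverses right (the inverse of R * S * T is T^-1 * S^-1 * R^-1 under Direct2D's row-vector convention).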
Step 1: set android:scaleType="matrix" on the ImageView in the XML file.
Step 2: map the screen touch points through the inverse of the image matrix.
Step 3: divide each mapped value by the screen density parameter to get the same coordinate values on all screens.
**XML**
<ImageView
    android:id="@+id/myImage"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:scaleType="matrix"
    android:src="@drawable/ga" />
**JAVA**
@Override
public boolean onTouchEvent(MotionEvent event) {
    float[] point = new float[]{event.getX(), event.getY()};
    Matrix inverse = new Matrix();
    getImageMatrix().invert(inverse);
    inverse.mapPoints(point);
    float density = getResources().getDisplayMetrics().density;
    int[] imagePointArray = new int[2];
    imagePointArray[0] = (int) (point[0] / density);
    imagePointArray[1] = (int) (point[1] / density);
    // 20 is an offset around the touch point
    Rect rect = new Rect(imagePointArray[0] - 20, imagePointArray[1] - 20,
            imagePointArray[0] + 20, imagePointArray[1] + 20);
    boolean b = rect.contains(267, 40); // 267, 40 are predefined image coordinates
    Log.e("Touch inside ", b + "");
    return true;
}

Pupil detection using OpenCV with an infrared image

I am trying to detect the pupil in an infrared image and calculate the center of the pupil.
In my setup, I used a camera sensitive to infrared light, added a visible-light filter to the lens, and placed two infrared LEDs around the camera.
However, the image I get is blurry and not very clear; maybe this is caused by the low resolution of the camera, whose maximum is about 700x500.
In the processing, the first thing I did was convert the RGB image to a gray image; however, the result is terrible and nothing is detected.
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // load image (check for failure before showing it)
    cv::Mat src = cv::imread("11_13_2013_15_36_09.jpg");
    if (src.empty())
    {
        std::cout << "failed to find the image";
        return -1;
    }
    cv::namedWindow("original");
    cv::imshow("original", src);
    cv::waitKey(10);

    // Invert the source image and convert to grayscale
    cv::Mat gray;
    cv::cvtColor(~src, gray, CV_BGR2GRAY);
    cv::imshow("image1", gray);
    cv::waitKey(10);

    // Convert to a binary image by thresholding it
    cv::threshold(gray, gray, 220, 255, cv::THRESH_BINARY);
    cv::imshow("image2", gray);
    cv::waitKey(10);

    // Find all contours
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(gray.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

    // Fill holes in each contour
    cv::drawContours(gray, contours, -1, CV_RGB(255, 255, 255), -1);
    cv::imshow("image3", gray);
    cv::waitKey(10);

    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = cv::contourArea(contours[i]);
        cv::Rect rect = cv::boundingRect(contours[i]);
        int radius = rect.width / 2;
        // If the contour is big enough and has a round shape,
        // then it is the pupil
        if (area >= 800 &&
            std::abs(1 - ((double)rect.width / (double)rect.height)) <= 0.3 &&
            std::abs(1 - (area / (CV_PI * std::pow(radius, 2)))) <= 0.3)
        {
            cv::circle(src, cv::Point(rect.x + radius, rect.y + radius), radius, CV_RGB(255, 0, 0), 2);
        }
    }

    cv::imshow("image", src);
    cv::waitKey(0);
    return 0;
}
When the original image is converted, the resulting gray image is terrible; does anyone know a better approach? I am completely new to this. If you have any comments on the rest of the code, which looks for the circle, please tell me. I also need to extract the positions of the two glints (the bright light points) in the original image; does anyone have an idea?
Thanks.
Try equalizing and filtering your source image before thresholding it ;)
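A minimal sketch of that suggestion, dropped in just before the threshold step (the kernel size and the 220 threshold are guesses that will need tuning on the actual IR frames):

// Equalize the low-contrast IR image, then blur to suppress sensor noise,
// and only then threshold.
cv::Mat gray;
cv::cvtColor(~src, gray, CV_BGR2GRAY);           // same inversion + grayscale as before
cv::equalizeHist(gray, gray);                    // spread the histogram of the dim IR image
cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0); // smooth speckle before thresholding
cv::threshold(gray, gray, 220, 255, cv::THRESH_BINARY);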

Given the aspect ratio of a rectangle, find the maximum scale and angle to fit it inside another rectangle

I've read a few dozen questions on this topic, but none seem to be exactly what I'm looking for, so I'm hoping this isn't a duplicate.
I have an image, whose aspect ratio I want to maintain, because it's an image.
I want to find the largest scale factor, and corresponding angle between 0 and 90 degrees inclusive, such that the image will fit wholly inside a given rectangle.
Example 1: If the image and rectangle are the same ratio, the angle will be 0, and the scale factor will be the ratio of the rectangle's width to the image's width. (Or height-to-height.)
Example 2: If the image and rectangle ratios are the inverse of each other, the scale factor will be the same as the first example, but the angle will be 90 degrees.
So, for the general case, given image.width, image.height, rect.width, rect.height, how do I find image.scale and image.angle?
OK, I figured it out on my own.
First, calculate the aspect ratio. If your image is 1:1, there's no point in rotating: the angle is always zero and the scaled side is simply min(Width, Height). That case is degenerate.
Otherwise, you can use this:
// assuming below that Width and Height are the rectangle's
_imageAspect = _image.width / _image.height;
if (_imageAspect == 1) { // an aspect of exactly 1 would zero the (1 - aspect^2) denominator used below
    trace( "square image...this does not lend itself to rotation ;)" );
    return;
}
_imageAspectSq = Math.pow( _imageAspect, 2 );

var rotate:Float;
var newHeight:Float;
if (Width > Height && Width / Height > _imageAspect) {
    // the rectangle has a wider aspect than the image: no rotation needed
    newHeight = Height;
    rotate = 0;
} else if (Height > Width && Height / Width > _imageAspect) {
    // the rectangle has a skinnier aspect than the image rotated 90 degrees
    newHeight = Width;
    rotate = Math.PI / 2;
} else {
    var hPrime = (_imageAspect * Width - _imageAspectSq * Height) / ( 1 - _imageAspectSq );
    var wPrime = _imageAspect * (Height - hPrime);
    rotate = Math.atan2( hPrime, wPrime );
    var sine = Math.sin(rotate);
    if (sine == 0) {
        newHeight = Height;
    } else {
        newHeight = (Width - wPrime) / sine;
    }
}
The first two cases are also degenerate: the rectangle's aspect ratio is more extreme than the image's, so the best fit is at exactly 0 or 90 degrees. This is similar to the square-within-a-rectangle case, except that there the square is always degenerate.
The code assumes radians instead of degrees, but it's not hard to convert.
(Also I'm a bit shocked that my browser's dictionary didn't have 'radians'.)
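For anyone wondering where hPrime and wPrime come from, here is a sketch of the derivation (assuming the scaled, rotated image touches all four sides of the W x H rectangle). With aspect ratio $a = w/h$ and scaled image height $u = s\,h$, requiring the rotated image's bounding box to equal the rectangle gives

$$a u \cos\theta + u \sin\theta = W, \qquad a u \sin\theta + u \cos\theta = H.$$

Writing $c = u\cos\theta$ and $d = u\sin\theta$ turns this into the linear system $a c + d = W$, $c + a d = H$, so

$$c = \frac{H - aW}{1 - a^2}, \qquad d = \frac{W - aH}{1 - a^2}, \qquad \theta = \operatorname{atan2}(d, c), \qquad u = \frac{d}{\sin\theta}.$$

In the code, wPrime $= a c$ and hPrime $= a d$, which gives the same angle since $\operatorname{atan2}(ad, ac) = \operatorname{atan2}(d, c)$, and newHeight $= (W - \text{wPrime})/\sin\theta = d/\sin\theta = u$, the scaled image height; the scale factor is then newHeight divided by the image's original height.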
