I have an image with width X and height Y.
Now I always want to set the height to 60px.
Which calculation do I use to find the new width so that the image is resized correctly?
I think you are trying to maintain the aspect ratio. If so, use the following:
ratio = originalHeight / newHeight
newWidth = originalWidth / ratio
I assume you want the width after the rescale to relate to the height in the same way it did before the rescale, i.e. you want the aspect ratio to remain constant.
aspect_ratio = width_old / height_old
This gives:
aspect_ratio = width_new / height_new
Thus
width_new = width_old * height_new / height_old
Which means
width_new = (60 * width_old) / height_old
For instance, assume an incoming image of 640x480 (plain old VGA). This has an aspect_ratio of 1.33333...
Rescaling this to be 60 pixels high would then require a new width of 60 * 640 / 480, or 80, which seems proper since 80/60 is indeed 1.3333...
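As a minimal sketch of this calculation (the helper name and the rounding to whole pixels are my own additions, not part of the answer), in Python:

def resize_to_height(old_width, old_height, new_height=60):
    # Preserve aspect ratio: width_new = width_old * height_new / height_old
    new_width = round(old_width * new_height / old_height)
    return new_width, new_height

print(resize_to_height(640, 480))  # (80, 60), matching the VGA example above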
You want to maintain an aspect ratio of y/x, which means that you need to compute y/x for the original image. Let z = y/x. Then, given any new height y' (in your case, 60 px), the new width x' follows from
y/x = z = y'/x'
x' = y' / z
I'm running a script that resizes images that are too large. I've used "resize_to_fit" to reduce images to a specific pixel size depending on the longer side, but I'm wondering if it's possible to do it with this logic instead: for any image whose width x height product is greater than a set value, resize the image so that the new width and height values are as large as possible while still being under that value. In other words, I don't want to arbitrarily resize the dimensions more than necessary, and I'd want to retain aspect ratio in this conversion. This may be more of a math question than a ruby one, but in any case, this is what I've tried:
image = Magick::Image.read(image_file)[0]
dimensions = image.columns, image.rows
resolution = dimensions[0] * dimensions[1]
if resolution > 4000000
  resolution_ratio = 4000000 / resolution.to_f
  dimension_ratio = dimensions[0].to_f * resolution_ratio
  image = image.resize_to_fit(dimension_ratio, dimension_ratio)
  image.write(image_file)
end
So let's say an image has a width of 2793px and a height of 1970px. The resolution would be 5,502,210. It thus goes through the conditional statement, and as of right now, outputs a new width of 2030 and height of 1432. The product of these two is 2,906,960—which is obviously well under 4,000,000. But there are other possible width x height combinations whose product could be much closer to 4,000,000 pixels than 2,906,960 is. Is there a way of determining that information, and then resizing it accordingly?
You need to calculate the ratio properly: it is the square root of your desired pixel count divided by (rows multiplied by columns):
row, col = [2793, 1970]
ratio = Math.sqrt(4_000_000.0 / (row * col))
[row, col].map(&ratio.method(:*))
#⇒ [
#  [0] 2381.400006266842,
#  [1] 1679.6842149465374
#]
[row, col].map(&ratio.method(:*)).reduce(:*)
#⇒ 3999999.9999999995
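If it helps to see the same idea with whole-pixel output, here is a small sketch (Python, with names of my own choosing; the 4,000,000 cap comes from the question). Flooring keeps the result safely under the cap:

import math

def fit_under_area(width, height, max_pixels=4_000_000):
    # Scale both sides by the same factor so that width * height <= max_pixels,
    # preserving the aspect ratio.
    if width * height <= max_pixels:
        return width, height
    ratio = math.sqrt(max_pixels / (width * height))
    return math.floor(width * ratio), math.floor(height * ratio)

print(fit_under_area(2793, 1970))  # (2381, 1679), product 3,997,699

The resulting width and height can then be passed to resize_to_fit (or resize) so the rescale stays uniform.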
I have 2 images, Image 1 and Image 2.
Image 1 has a size of 512 (width) x 515 (height).
Then Image 2 has a size of 256 (width) x 256 (height).
Image 2 will be used as a watermark and will be placed on top of Image 1.
I want Image 2's size to depend on Image 1's size; Image 2 can be resized up or down depending on the size of Image 1.
The new size (width & height) of Image 2 should be 20 percent of the size of Image 1 while at the same time preserving its aspect ratio.
What's the algorithm to find the new size (width & height) of Image 2?
Right now I use (20 / 100) * 512 to resize it, but this does not preserve Image 2's aspect ratio.
If the two images don't have the same aspect ratio, then it's mathematically impossible to make Image 2's width 20% of Image 1's width and its height 20% of Image 1's height while preserving Image 2's aspect ratio.
So, choose one axis to scale by, and compute the other dimension so that the aspect ratio is preserved.
e.g., using the width:
new_image2_width = 512 * (20 / 100) = 102.4
Then compute the new height to preserve the aspect ratio:
original_aspect_ratio = image2_width / image2_height = 256 / 256 = 1
new_image2_height = 102.4 / original_aspect_ratio = 102.4
Or do it the other way (this time multiplying by the ratio):
new_image2_height = 515 * (20 / 100) = 103
original_aspect_ratio = image2_width / image2_height = 256 / 256 = 1
new_image2_width = 103 * original_aspect_ratio = 103
If you have to handle arbitrary image sizes and arbitrary scale factors, you will need to switch between the two approaches depending on what you want the rule to be. E.g. you could always go with the smaller of the two results, or use the ratio-adjusted height unless it comes out larger than Image 1's height and fall back to the second approach in that case, or vice versa. A sketch of the "smaller of the two" rule follows below.
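As an illustration only (the function name and the choice of the "smaller of the two" rule are mine, not part of the answer), a minimal sketch in Python:

def watermark_size(base_w, base_h, mark_w, mark_h, fraction=0.2):
    # Scale the watermark uniformly so that neither side exceeds `fraction`
    # of the corresponding side of the base image ("smaller of the two" rule).
    scale = min(fraction * base_w / mark_w, fraction * base_h / mark_h)
    return round(mark_w * scale), round(mark_h * scale)

print(watermark_size(512, 515, 256, 256))  # (102, 102): limited by the width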
I have a four-element position vector [xmin ymin width height] that specifies the size and position of a crop rectangle for image I. How can I find the new position and size of this rectangle after image I is resized?
It is not entirely clear what you want, as we don't know your coordinate system. Assuming x is the horizontal axis and y is the vertical axis and your point (1,1) is at the top left corner, you can use the following snippet:
p = [xmin ymin width height];
I = I_orig(p(2):p(2)+p(4)-1,p(1):p(1)+p(3)-1);
The size is of course your specified width and height.
You can convert your original bounding box to relative values (that is, assuming the image size is 1x1):
[origH origW] = size(origI(:,:,1));
relativeBB = [xmin / origW, ymin / origH, width / origW, height / origH];
Now, no matter how you resized your origI, you can recover the bounding box w.r.t the new size from the relative representation:
[currH currW] = size(I(:,:,1));
currBB = relativeBB .* [currW, currH, currW, currH];
You might need to round things a bit: you might find floor better for xmin and ymin and ceil more suitable for width and height.
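For reference, the same relative-coordinates idea as a short sketch outside MATLAB (Python; the function name and the example numbers are mine), using the floor/ceil rounding suggested above:

import math

def rescale_bbox(bbox, old_size, new_size):
    # bbox = (xmin, ymin, width, height); old_size and new_size are (width, height) in pixels.
    xmin, ymin, w, h = bbox
    ow, oh = old_size
    nw, nh = new_size
    return (math.floor(xmin / ow * nw), math.floor(ymin / oh * nh),
            math.ceil(w / ow * nw), math.ceil(h / oh * nh))

print(rescale_bbox((100, 50, 200, 120), (800, 600), (400, 300)))  # (50, 25, 100, 60)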
I have some values that I need to plot into a 2D HTML5 <canvas>. All values are in the range [-1, +1] so I decided to set a transformation (scale + displacement) on the canvas 2D-context before drawing:
var scale = Math.min(canvas.width, canvas.height) / 2;
ctx.setTransform(scale, 0, 0, scale, canvas.width / 2, canvas.height / 2);
Each value is drawn using the arc method, but since I want a fixed arc radius (no matter what scaling is used) I'm dividing the radius by the current scale value:
ctx.arc(value.X, value.Y, 2 / scale, 0, 2 * Math.PI, false);
Now, a canvas of size 200 x 200 will result in a scale factor of 100, which in turn results in an arc radius of 0.02. Unfortunately, it seems that values like 0.2 or 0.02 don't make any difference to the resulting arc radius; only the stroke thickness changes.
You can see this behavior in the JsFiddle. Is this a bug or am I doing something wrong?
The issue is that after scaling by such a huge factor, your lines now have a lineWidth that is far too big for the arcs to be drawn correctly with stroke.
Just set the lineWidth to 1 / scale (e.g. ctx.lineWidth = 1 / scale;) before drawing, and all will work fine.
I have a camera placed 10 meters away from a portrait (a rectangle) with width = 50 cm and height = 15 cm, and I want to get the dimensions of this portrait inside the captured image. The captured image has width = 800 px and height = 600 px.
How can I calculate the dimensions of the portrait inside the image? Any help please?
I am assuming the camera is located along the center normal of the portrait, looking straight at the portrait's center.
Let's define some variables.
Horizontal field of view: FOV (you need to specify)
Physical portrait width: PW = 50 cm
Width of portrait plane captured: CW cm (unknown)
Image width: IW = 800 px
Width of portrait in image space: X px (unknown)
Distance from camera to subject: D = 10 m
We know tan(FOV) = (CW cm) / (100 * D cm). Therefore CW = tan(FOV) * 100 * D cm.
We know PW / CW = X / IW. Therefore X = (IW * PW) / (tan(FOV) * 100 * D) px.
I agree with Timothy's answer in that you need to know the camera's field of view (FOV). I'm not sure I totally follow/agree with his method, however. I think this is similar, but it differs: the FOV needs to be divided by two to split our view into two right-angled triangles. Use tan(x) = opposite / adjacent:
tan(FOV/2) = (IW/2) / (Dist * 100)
where IW is the true image width (we must divide by two because the right-angled triangle only covers half of the width) and Dist is the distance from the camera to the portrait (converted to cm).
Rearrange that to find the width of the entire image (IW):
IW = tan(FOV/2) * (2 * Dist * 100)
You can now work out the width of each pixel (PW) using the number of pixels in the image width (800 for you).
PW = IW / NumPixels
PW = IW / 800
Now divide the true width by this value to find the number of pixels.
PixelWidth = TrueWidth / PW
The same can be done for the height, but for that you need the camera's vertical field of view.
I'm not sure this is the same as Timothy's answer, but I'm pretty sure this is correct.
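As a worked sketch of this second approach (Python; the 60-degree horizontal FOV is only an assumed example value, since the question does not state the camera's FOV):

import math

fov_deg = 60.0       # assumed horizontal field of view (not given in the question)
dist_cm = 10 * 100   # camera-to-portrait distance: 10 m in cm
image_px = 800       # image width in pixels
portrait_cm = 50     # physical portrait width in cm

scene_cm = math.tan(math.radians(fov_deg / 2)) * 2 * dist_cm  # width of the captured plane
cm_per_px = scene_cm / image_px
print(round(portrait_cm / cm_per_px))  # about 35 px wide for these example numbers

Repeating the same steps with the vertical FOV and the 600 px image height gives the portrait's height in pixels.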