I am trying to fit an image into a predefined graphical representation of a frame (not a view.frame) using a UIScrollView.
To adjust the zoomScale so that the image fits into the desired frame width of 250, the code is basically:
float frameWidth=250;
float currentZoomScale=frameWidth/currentImage.size.width;
self.scrollView.zoomScale=currentZoomScale;
This works almost fine...almost. My problem is a slight inaccuracy depending on the image width.
For example, an image with a width of 640 will result in a zoomScale of 0.390625.
But the visible image width on the screen will be 1 pixel below 250. With other images of different sizes the algorithm works.
I suspect the reason is that the floating-point division result collides with the integer nature of the actual screen pixels; I mean that the zoomScale should be something like 0.391 or similar (I tried 0.4, which is too big).
My questions:
Is the algorithm above the right way to get what I want?
If yes, is there a way to take the inaccuracy into account, i.e. a better algorithm?
Thanks for any reply!
I suspect the division you are using produces a fractional number, and when it is converted to pixels the fractional part is discarded, because you can't have 0.5 of a pixel. You could automatically round up. Change your algorithm to this:
(currentImage.size.width+frameWidth-1)/currentImage.size.width
This is the integer ceiling-division idiom, (a + b - 1) / b, so with integer arithmetic it will give you a whole number, rounded up, every time.
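A minimal sketch of that round-up idea in plain C (the function name and the one-pixel nudge are illustrative assumptions of mine, not part of the original answer):

#include <math.h>

/* Return a zoom scale that never shows fewer than frame_width pixels
   once the scaled width gets truncated to whole screen pixels. */
float zoom_scale_for_frame(float image_width, float frame_width)
{
    float scale = frame_width / image_width;        /* e.g. 250/640 = 0.390625 */
    float shown = floorf(scale * image_width);      /* width after truncation  */
    if (shown < frame_width)                        /* lost a pixel? round up  */
        scale = (frame_width + 1.0f) / image_width; /* nudge the scale up      */
    return scale;
}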
Related
I'm a visual/UI designer working on a project/product that was designed by another designer. This designer provided the front-end dev with good-quality PNG icons, but when the front-end dev sets the images' scale to 0.7, they look blurry.
I've noticed that if we set the image's scale to 0.5, they don't look blurry at all:
0.7: http://i.stack.imgur.com/jQNYG.png
0.5: http://i.stack.imgur.com/hBShu.png
Does anyone know why that happens?
I personally always work with 0.5 scales because that's how I was taught. Is there any logical reason for this?
Sorry if the answer is obvious. I am really curious about that. Thanks in advance.
What is happening largely depends upon the software that you are using to shrink the image. There is a major difference between reducing by 0.5 and by 0.7.
If you shrink by 0.5, you are combining 4 pixels into one.
If you shrink by 0.7 you are doing fractional sampling. 10 pixels in each direction get reduced to 7.
In 0.5 sampling, you read two pixels across, read two pixels down.
In 0.7 sampling you read 1.42857142857143 pixels in each direction. In order to do that you have to weight pixel values. That is going to create blurriness in a drawing.
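A minimal sketch of that exact 2x2 averaging in C (grayscale, even dimensions, illustrative only, not from the original answer):

/* Downscale by exactly 0.5: every output pixel is the plain average
   of a 2x2 block of input pixels, so no fractional weighting occurs.
   Assumes w and h are even; dst must hold (w/2)*(h/2) bytes. */
void downscale_half(const unsigned char *src, int w, int h,
                    unsigned char *dst)
{
    for (int y = 0; y < h / 2; y++)
        for (int x = 0; x < w / 2; x++) {
            int sum = src[(2*y)     * w + 2*x]      /* top-left     */
                    + src[(2*y)     * w + 2*x + 1]  /* top-right    */
                    + src[(2*y + 1) * w + 2*x]      /* bottom-left  */
                    + src[(2*y + 1) * w + 2*x + 1]; /* bottom-right */
            dst[y * (w/2) + x] = (unsigned char)(sum / 4);
        }
}

At 0.7 there is no such clean mapping: each output pixel would have to blend about 1.43 source pixels per axis with fractional weights, which is exactly the smearing described above.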
It's because when you halve an image's size (in both dimensions), you are effectively combining exactly 4 pixels into one. When you use a slightly off scale (such as 0.7), however, one-and-a-fraction source pixels go into each output pixel (in each dimension). This means the data from one source pixel gets spread across up to 4 output pixels instead of one, causing a blurry effect.
Sorry, making an example image would be quite difficult for me, but I hope you get the concept.
I think this has to do with interpolation: when you resize an image, there is no way of knowing what is supposed to be in between the two pixels that are essentially being merged. What the computer tries to do is guess what the new pixel should look like by looking at the pixels around it and combining their values.
So, for example, in the image above it asks: what is in between white and orange? A less bright orange. OK, let's make the merged pixel look like that. When you get to a corner there might be more orange, so the new pixel will look more orange; you get the point.
Now, when you scale by 0.5, the computer merges pixels at a constant rate. What I mean by that is: if you divide an image in half, you always merge exactly 4 pixels together. If you scale by 0.7, however, you merge an irregular number of pixels, so the concentration of source pixels varies across the scaled image, which results in a blurry image.
If you don't understand this, I understand; I kinda went off on a tangent... if you need more clarification, comment below :)
Add an .img-crisp class to the image:
.img-crisp {
  image-rendering: -moz-crisp-edges;          /* Firefox */
  image-rendering: -o-crisp-edges;            /* Opera */
  image-rendering: -webkit-optimize-contrast; /* WebKit (non-standard naming) */
  image-rendering: crisp-edges;
  -ms-interpolation-mode: nearest-neighbor;   /* IE (non-standard property) */
}
The image-rendering CSS property sets an image scaling algorithm. The property applies to an element itself, to any images set in its other properties, and to its descendants.
Source.
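To apply it, just add the class to the image element, e.g. <img class="img-crisp" src="icon.png"> (the file name here is only a placeholder).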
I have an image of the size 640*640*3, while another image of the size 125*314*3. I want to obtain the size ratio of the second image to the first image, but I can't find a way to do it.
I have tried the traditional divide method, as well as rdivide, but neither works.
If I take the traditional approach of multiplying out the 3-D size values first and then comparing, will that be correct?
For example, I would do something like 640*640*3 = 1,228,800, then 125*314*3 = 117,750, and finally take 117,750 / 1,228,800 ≈ 0.096. Is that the right answer?
I'm assuming you are referring to the ratio of the areas of the two images. If this is the case, just use the width and height. It looks like you are using RGB images, so don't include the number of channels; in any case, the number of channels cancels out when you form the ratio.
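For example, written out with the channels included, the 3s simply cancel:
(125*314*3) / (640*640*3) = (125*314) / (640*640) ≈ 0.0958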
Therefore, yes your approach is correct:
(125*314) / (640*640) = 0.0958
This means that the smaller (or second) image occupies roughly 9.6% of the larger (or first) image.
That depends on what you mean by size ratio.
It looks like you have RGB images. If you mean the area, it is (640*640)/(125*314); if you mean the height, it is 640/125 (size in MATLAB is rows first, i.e. height). There are more options too, so be more specific in your question.
I have an image (logical values), like this
I need to get this image resampled from pixel to mm or cm; this is the code I use to get the resampling:
function [ Ires ] = imresample3( I, pixDim )
% Resample the logical image I onto a grid spaced pixDim units apart
[r,c] = size(I);
x = 1:1:c;                        % original column coordinates (pixels)
y = 1:1:r;                        % original row coordinates (pixels)
[X,Y] = meshgrid(x,y);
rn = r*pixDim;                    % extent of the new grid; much smaller
cn = c*pixDim;                    % than r and c when pixDim < 1
xNew = 1:pixDim:cn;               % query points, spaced pixDim apart
yNew = 1:pixDim:rn;
[Xnew,Ynew] = meshgrid(xNew,yNew);
Id = double(I);
Ires = interp2(X,Y,Id,Xnew,Ynew); % samples only pixels 1..cn, 1..rn
end
What I get is a black image. I suspect that this code does something other than what I have in mind: it seems to take only the upper-left part of the image.
What I want instead is to have the same image on a mm/cm scale: every white pixel should be mapped from its original position to its new position (in mm/cm); what happens is certainly not what I expect.
I'm not sure that interp2 is the right command to use.
I don't want to resize the image, I just want to go from pixel world to mm/cm world.
pixDim is, of course, the size of an image pixel, obtained by dividing the height of the ear in cm by its height in pixels (on average it is 0.019 cm).
Any ideas?
EDIT: I was quite sure that the code made no sense, but someone told me to do it that way... Anyway, if I have two edged ears, I first need to scale both to their real dimensions and then perform some operations on them. What I mean by "real dimensions" is that if one ear measures 6.5x3.5 cm and the other 6x3.2 cm, I need to perform the operations on those dimensions.
I don't get how I can move from pixel dimensions to cm dimensions BEFORE doing the operations.
I want to move from one world to the other to get rid of the capturing distance (I suppose that if one picture of an ear is taken from near and the other from far, they will have different sizes in pixels).
Am I correct? Is there a way to do it? I thought I could plot the ear with scaled axes, but then I suppose I cannot subtract one from the other, right?
MATLAB does not use units. To apply your factor of 0.019 cm/pixel you would have to scale by a factor of 0.019 to get a 1 cm grid, but this would cause any artefact smaller than 1 cm to be lost.
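For example, at 0.019 cm/pixel a hypothetical 640-pixel-wide image spans only 640 * 0.019 ≈ 12.2 cm, so a 1 cm grid would keep only about 12 samples across the entire image.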
Best practice is to display the data using multiple axes, one for cm and one for pixels. It's explained here: http://www.mathworks.de/de/help/matlab/creating_plots/using-multiple-x-and-y-axes.html
Any function processing the data should either be independent of the scale or take the scale factor as an input argument; anything else is a sign of serious algorithmic issues.
My math must be very rusty. I have to come up with an algorithm that will take a known:
x
y
width
height
of elements in a document and translate them to the same area on a different hardware device. For example, the document is created for print (let's assume 8.5"x11" letter size), and the elements inside this document will then be transferred to a proprietary e-reader.
Also, the known facts about the e-reader: the screen is 825x1200 pixels in portrait, at 150 pixels per inch.
I am given the source elements from the printed document in points (72 PostScript points per inch).
So far I have an algorithm that gets close, but it needs to be exact, and I have a feeling I need to incorporate aspect ratio into the picture. What I am doing now is:
x (in pixels) = ( x(in points)/width(of document in points) ) * width(of ereader in pixels)
etc.
Any clues?
Thanks!
You may want to reverse the order of your operations to reduce the effect of integer truncation, as follows:
x (in pixels) = x(in points) * width(of ereader in pixels) / width(of document in points)
I don't think you have an aspect-ratio problem, unless you forgot to mention that your e-reader device has non-square pixels. In that case you will have a different number of pixels per inch horizontally and vertically on the device's screen, so you will use the horizontal ppi for x and the vertical ppi for y.
Assuming your coordinates are integers, the expression x/width truncates (integer division). What you need is to perform the division/multiplication in floating point and then truncate. Something like
(int)(((double)x)/width1*width2)
should do the trick (using C-like conversion to double and int)
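Putting the question's numbers together, a minimal sketch in C (the sample coordinate is made up; 612 points is simply 8.5 inches times 72 points per inch):

#include <stdio.h>

/* Map a coordinate in document points to e-reader pixels.
   Multiplying before dividing keeps integer truncation from
   eating precision. */
int points_to_pixels(int pt, int doc_size_pt, int screen_size_px)
{
    return (int)((double)pt * screen_size_px / doc_size_pt);
}

int main(void)
{
    /* 8.5" letter width = 612 points; e-reader width = 825 pixels */
    int x_pt = 306;                              /* halfway across the page */
    int x_px = points_to_pixels(x_pt, 612, 825); /* -> 412 (412.5 truncated) */
    printf("%d pt -> %d px\n", x_pt, x_px);
    return 0;
}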
When creating vector graphics for PDFs, I use one of the "create" functions for PDF rendering, for instance cairo_pdf_surface_create_for_stream. The signature of that function is:
cairo_surface_t *
cairo_pdf_surface_create_for_stream (cairo_write_func_t write_func,
                                     void               *closure,
                                     double              width_in_points,
                                     double              height_in_points);
Now, I can set the size of the surface in points, but the size of one point is seemingly hardcoded. In the description it says:
width_in_points: width of the surface, in points (1 point == 1/72.0 inch)
height_in_points: height of the surface, in points (1 point == 1/72.0 inch)
As you can see, 1pt = 1/72" (72 dpi). But how do I change that setting?
I could factor something into the size, when using a different resolution and compensate that way, but this seems to me like worst practice ever.
A point is a standard typographical unit of measure. Whether or not you're talking about Cairo, a point is simply 1/72". It's not some setting you change, just like the fact that you don't change the number of inches in a foot.
The whole reason for using a physical measurement (points) instead of a screen-dependent one (pixels) is resolution-independence. This is a Good Thing.
What are you hoping to accomplish by changing the DPI?
If by "change the dpi" you mean you want to draw at a different scale than 1/72", you can use cairo_scale(). If you are referring to the dpi of fallback images (regions that are rasterized because they cannot be drawn natively by PDF), use cairo_surface_set_fallback_resolution().
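A minimal sketch of the cairo_scale() route, assuming you want to draw in millimetres (the A5 page size and the rectangle are made-up examples):

#include <cairo.h>
#include <cairo-pdf.h>

int main(void)
{
    double mm = 72.0 / 25.4; /* points per millimetre */

    /* The surface size must still be given in points (here: A5, 148 x 210 mm) */
    cairo_surface_t *surface =
        cairo_pdf_surface_create("out.pdf", 148 * mm, 210 * mm);
    cairo_t *cr = cairo_create(surface);

    cairo_scale(cr, mm, mm);             /* from here on, 1 user unit == 1 mm */
    cairo_rectangle(cr, 10, 10, 50, 30); /* a 50 x 30 mm rectangle at (10,10) mm */
    cairo_fill(cr);

    cairo_destroy(cr);
    cairo_surface_destroy(surface);
    return 0;
}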