My math must be very rusty. I have to come up with an algorithm that will take a known:
x
y
width
height
of elements in a document and translate them to the same area on a different hardware device. For example, the document is created for print (assume 8.5"x11" letter size) and elements inside this document are then transferred to a proprietary e-reader.
The known facts about the e-reader: the screen is 825x1200 pixels in portrait orientation, at 150 pixels per inch.
I am given the source elements from the printed document in points (72 Postscript points per inch).
So far I have an algorithm that gets close, but it needs to be exact, and I have a feeling I need to incorporate aspect ratio into the picture. What I am doing now is:
x (in pixels) = ( x(in points)/width(of document in points) ) * width(of ereader in pixels)
etc.
Any clues?
Thanks!
You may want to reverse the order of your operations to reduce the effect of integer truncation, as follows:
x (in pixels) = x(in points) * width(of ereader in pixels) / width(of document in points)
I don't think you have an aspect ratio problem, unless you forgot to mention that your e-reader device has non-square pixels. In that case you will have a different amount of pixels per inch horizontally and vertically on the device's screen, so you will use the horizontal ppi for x and the vertical ppi for y.
Assuming your coordinates are integers, the formula x/width truncates (integer division). What you need is to perform the division/multiplication in floating point, then truncate. Something like
(int)(((double)x)/width1*width2)
should do the trick (using C-like conversion to double and int)
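Putting the two answers together, a minimal sketch in C (the function name and the choice of round() over plain truncation are mine):

#include <math.h>

/* Map a coordinate from document space (PostScript points) to device
   space (pixels).  Multiplying before dividing, in double precision,
   avoids intermediate truncation; rounding to the nearest pixel removes
   the downward bias of a plain (int) cast. */
int points_to_pixels(int x_points, int doc_width_points, int screen_width_pixels)
{
    return (int)round((double)x_points * screen_width_pixels / doc_width_points);
}

For an 8.5" page (612 points wide) mapped to the 825-pixel screen, points_to_pixels(100, 612, 825) gives round(134.80...) = 135. If the device had non-square pixels, you would apply the same function per axis with that axis's dimensions, as noted above.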
I have a lot (millions) of polygons from OpenStreetMap data, mostly (more than 99%) with exactly four coordinates representing houses.
I currently save the four coordinates of each house explicitly as tuples of floats (latitude and longitude), taking 32 bytes per house.
Is there a way to store this information more compactly (fewer than 32 bytes), since the four coordinates differ only in the last few decimals?
If your map patch is not too large, you can store coordinates relative to some base point (for example, the bottom-left corner). Take these differences and normalize them by the map size, like this:
uint16_diff = (uint16)(65535 * (lat - latbottom) / (lattop - latbottom))
This approach lets you store 16-bit integers instead of 32-bit floats.
For rectangles (you can store them in a separate list) there is a way to store five 16-bit values instead of eight: the coordinates of the top-left corner, the width, the height, and the angle of rotation (other parameter sets are possible, for example including a second corner).
Combining both methods, one might reduce the data size by up to 3.2 times.
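A sketch of the 16-bit quantization in C (the helper names are mine; the step size, and hence the accuracy, is (lattop - latbottom) / 65535):

#include <stdint.h>

/* Quantize a latitude to a 16-bit offset inside the patch's bounding box. */
uint16_t quantize(double lat, double lat_bottom, double lat_top)
{
    return (uint16_t)(65535.0 * (lat - lat_bottom) / (lat_top - lat_bottom));
}

/* Recover the approximate coordinate from the stored 16-bit value. */
double dequantize(uint16_t q, double lat_bottom, double lat_top)
{
    return lat_bottom + q * (lat_top - lat_bottom) / 65535.0;
}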
As MBo said, you can store one corner of each house and encode the other three corners relative to it.
Also, if the buildings are very similar you can build a "dictionary" of building shapes. For each building you then store only its index in the dictionary plus a few parameters, such as the coordinates of its first corner and its rotation.
You are giving no information on the resolution you want to keep.
Assuming 1 m accuracy is enough, 24 bits can cover up to 16000 km. Then 8 bits should also be enough to represent the size information (up to 256 m).
This would make 8 bytes per house.
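One possible 8-byte layout under those assumptions (the packing below is my illustration, not part of the answer):

#include <stdint.h>

/* Pack a house into 8 bytes: 24-bit x and y in metres from a common
   origin (covers about 16,700 km), plus 8-bit width and height in metres. */
uint64_t pack_house(uint32_t x_m, uint32_t y_m, uint8_t w_m, uint8_t h_m)
{
    return  (uint64_t)(x_m & 0xFFFFFF)
          | ((uint64_t)(y_m & 0xFFFFFF) << 24)
          | ((uint64_t)w_m << 48)
          | ((uint64_t)h_m << 56);
}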
More aggressive compression, for instance with Huffman coding, will probably not work on the locations (their distribution is relatively uniform); it may do a little better on the sizes, but the benefit is marginal.
Maybe it's just my head spinning, but there seems to be no documentation on the units of measure for HPDF's HPDF_Font_TextWidth() function, nor can I figure it out.
The number I get for a particular text of 7 characters is around 3000. The rendered text seems to be around 80 pixels, which is also returned from HPDF_Page_TextWidth().
HPDF_Font_TextWidth() does not know the font size so it must use some other unit. What is it?
And is that the same unit that HPDF_Font_GetBBox() returns?
I'm actually trying to put text in the center of a rectangle, and need the width and height of the text in the units of the rectangle.
This is an old post, but I stumbled upon it because I had the same issue. As far as I can tell from the source of HPDF_Font_GetUnicodeWidth(), the value it returns must be multiplied by the font size, then divided by 1000, to get the width in points, which is what the rest of the PDF coordinate system uses.
width = (HPDF_Font_TextWidth() * font_size) / 1000.0;
The font metric functions (for example HPDF_Font_GetBBox(), HPDF_Font_GetAscent(), HPDF_Font_GetDescent(), and HPDF_Font_GetCapHeight()) likewise return EM units, which must be divided by 1000 and multiplied by the point size to get points, as stated above.
The units are measured relative to the baseline; the descender and the BBox left and bottom values are negative. The zone between the caps height and the ascender is reserved for diacritics.
To calculate the height of a slug of text, compute caps height minus descender, or ascender minus descender if your text has upper-case diacritics.
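Putting this together, here is a sketch of centring text in a rectangle with Haru (the helper name and layout choices are mine, and error handling is omitted):

#include <string.h>
#include "hpdf.h"

/* Draw text centred in the rectangle (rx, ry, rw, rh), all in points.
   Assumes font and size were already set with HPDF_Page_SetFontAndSize(). */
static void draw_centered(HPDF_Page page, HPDF_Font font, float size,
                          const char *text,
                          float rx, float ry, float rw, float rh)
{
    HPDF_TextWidth tw = HPDF_Font_TextWidth(font, (const HPDF_BYTE *)text,
                                            (HPDF_UINT)strlen(text));
    /* All metrics are in 1/1000 EM: scale by size / 1000 to get points. */
    float text_w  = tw.width * size / 1000.0f;
    float cap_h   = HPDF_Font_GetCapHeight(font) * size / 1000.0f;
    float descent = HPDF_Font_GetDescent(font) * size / 1000.0f;  /* negative */
    float text_h  = cap_h - descent;

    float x = rx + (rw - text_w) / 2.0f;
    float y = ry + (rh - text_h) / 2.0f - descent;  /* TextOut positions the baseline */

    HPDF_Page_BeginText(page);
    HPDF_Page_TextOut(page, x, y, text);
    HPDF_Page_EndText(page);
}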
Keyword: Haru PDF
I have heavily edited this question to provide more details.
I came across a limitation when drawing to CanvasRenderingContext2D via the EaselJS framework. My objects are simple vector shapes, but when their position surpasses a couple of million pixels the drawing starts to crumble apart. The same shape with x position 58524928 (the parent container is moved to -58524928 so that the object stays visible on stage) renders visibly broken, and the more I offset it the more it crumbles. Also, when I try to drag the object with the mouse, it "jumps" as if snapped to a large grid.
The shapes are ultimately drawn to the CanvasRenderingContext2D via the drawImage() method in the EaselJS framework. Here is a snippet from the code:
ctx.drawImage(cacheCanvas, this._cacheOffsetX+this._filterOffsetX, this._cacheOffsetY+this._filterOffsetY, cacheCanvas.width/scale, cacheCanvas.height/scale);
I suppose it has something to do with the limited precision of floating-point numbers in JavaScript:
Note that there are infinitely many real numbers, but only a finite
number of them (18437736874454810627, to be exact) can be represented
exactly by the JavaScript floating-point format. This means that when
you're working with real numbers in JavaScript, the representation of
the number will often be an approximation of the actual number.
Source: JavaScript: The Definitive Guide
Can someone confirm/reject my assumption? 58 million (58524928) does not seem like much to me; is it some inefficiency of EaselJS, or is it a limit of the canvas?
PS:
Scaling has no effect. I have drawn everything 1000 times smaller and 1000 times closer with no difference. Equally, if you scale the object up 1000 times while keeping x at 58 million, it will not look crumbled; but move it to 50 billion and you are back where you started. Basically, the offset divided by the size hits a constant limit beyond which detail is lost.
EDIT
Here is an example: jsfiddle.net/wzbsbtgc/2. Basically there are two separate problems:
If I use huge numbers as parameters for the drawing itself (red curve), it gets distorted. This can be avoided by using smaller numbers and moving the DisplayObject instead (blue curve).
In both cases it is not possible to move the DisplayObject by 1px. I think this is explained in GameAlchemist's post.
Any advice/workaround for the second problem is welcome.
It appears that Context2D uses lower-precision numbers for transforms. I haven't confirmed the exact precision, but my guess is that it uses floats instead of doubles.
As such, with higher values the transform method (and other similar C2D methods) that EaselJS relies on loses precision, similar to what GameAlchemist describes. You can see the issue reproduced using pure C2D calls here:
http://jsfiddle.net/9fLff2we/
The best workaround I can think of is to precalculate the "final" values outside the transform methods. Normal JS numbers have higher precision than what C2D uses, so this should solve the issue. A really rough example to illustrate:
http://jsfiddle.net/wzbsbtgc/3/
The behavior you see is related to the way numbers are represented in the IEEE 754 standard.
While JavaScript uses 64-bit floats, WebGL uses only 32-bit floats, and since most (if not all) canvas implementations are WebGL/GPU accelerated, your numbers will be (down)converted before the draw.
The IEEE 754 single-precision standard uses 32 bits to represent a number: 1 sign bit, 8 exponent bits, and 23 stored mantissa bits (plus one implicit leading bit, for 24 bits of effective precision).
Let's call IEEE_754_32max the largest integer below which every integer is exactly representable:
IEEE_754_32max = 1 << 24 = 16,777,216 (16+ million)
We have full precision for integers only in the [-IEEE_754_32max, IEEE_754_32max] range.
Beyond that point the exponent takes over, and we lose the weakest bits of the mantissa.
For instance, 16,777,217 = 2^24 + 1 is too big: it cannot fit into 24 bits of mantissa, so it is stored as the nearest representable value, 16,777,216.
- We lost the final '1' -
The grid effect you see is this precision loss at work: a figure such as 58524928 needs 26 significant bits, so the 2 lowest bits are lost and the representable values sit on a grid 4 units apart. We have, for instance:
58524928 + 1 == 58524928 (in 32-bit floats)
So a figure near 58524928 will snap to the nearest representable value, either rounding back or "jumping" by the grid step: hence your grid effect.
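A quick check of that collapse, written as a standalone C snippet (illustrative only; the values match the thread's example):

#include <stdio.h>

int main(void)
{
    /* 58,524,928 needs 26 significant bits; a float keeps only 24,
       so at this magnitude the representable values are 4 apart. */
    float base = 58524928.0f;
    printf("%d\n", base + 1.0f == base);      /* prints 1: the +1 is lost */
    printf("%.1f\n", (double)(base + 3.0f));  /* prints 58524932.0: snapped to the grid */
    return 0;
}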
Solution?
-->> Change the units you use in your application so the figures stay much smaller. Maybe you're using mm: use meters or kilometers instead.
Mind that the precision you are "losing" is an illusion: display resolution is the first limit, and the mouse is at best 1 pixel precise, so even with a 4K display there's no way 32-bit floats should be the limiting factor.
Choose the right measurement unit to fit all your coordinates into a smaller range and you'll solve your issue.
More clearly: you must change the units you use for the display. That does not mean trading away accuracy: you just do the translation + scaling yourself before drawing. That way you still use JavaScript's 64-bit IEEE accuracy, and you no longer hit the 32-bit rounding issue.
(You might override the x, y properties with getters/setters:
Object.defineProperty(targetObject, 'x', {
    get: function () { return view.pixelWidth * (this.rx - view.left) / view.width; }
});
)
You can use any sized drawing coordinates that you desire.
Canvas will clip your drawing to the display area of the canvas element.
For example, here's a demo that starts drawing a line from x = -50000000 and finishes on the canvas. Only the visible portion of the line is rendered. All non-visible (off-canvas) points are clipped.
var canvas=document.getElementById("canvas");
var ctx=canvas.getContext("2d");
var cw=canvas.width;
var ch=canvas.height;
ctx.beginPath();
ctx.moveTo(-50000000,100);
ctx.lineTo(150,100);
ctx.stroke();
body{ background-color: ivory; padding:10px; }
#canvas{border:1px solid red;}
<h4>This line starts at x = negative 50 million!</h4>
<canvas id="canvas" width=300 height=300></canvas>
Remember that the target audience for a W3C standard is mainly browser vendors. The unsigned long value (up to 2^32 - 1) addresses more the underlying system for creating a bitmap in the browser. The standard says values in this range are valid, but there is no guarantee the underlying system will be able to provide a bitmap that large (most browsers today limit the bitmap to much smaller sizes than this). You stated that you don't mean the canvas element itself, but the linked reference is the interface definition of the element, so I just wanted to point that out in regards to the number range.
From the JavaScript side of things, where we developers usually are, and with the exception of typed arrays, there is no such thing as ulong etc. There is only Number (aka unrestricted double), which is signed and stores numbers in 64 bits in the IEEE-754 format.
The valid range for Number is:
Number.MIN_VALUE = 5e-324
Number.MAX_VALUE = 1.7976931348623157e+308
You can use any values in this range with canvas for your vector paths. Canvas will clip them to the bitmap based on the current transformation matrix when the paths are rasterized.
If by drawing you mean another bitmap (i.e. Image, Canvas, Video), then it will be subject to the same system and browser capabilities/restrictions as the target canvas itself. Positioning (direct or via transformation) is limited (in sum) by the range of a Number.
On Mac OS X, I need to convert a point measurement to pixel measurement.
The formula which I know is
pixel = point * resolution (in terms of dpi) / 72
I have few measurements which I want to convert to pixels. Although reverse cases would also be possible.
How to do this in Cocoa or Quartz?
Does it depend on the axis? That is, would 5 pixels along the Y axis equal 5 pixels along the X axis in terms of points? Or is it safe to assume the resolution is the same for both the X and Y axes?
Please note that I do not want to make any assumption about resolution.
You probably don't want to convert anything to pixels. OS X now works in points; so for example when you draw a rectangle you are giving its dimensions in points, not pixels.
An OS X Quartz point is related to, but not the same as, a (computer) printing point; the two used to be the same, with 72 points = 1 inch. However, WYSIWYG has become approximate: 72 points (note: points, not pixels) on screen is no longer a physical inch, as screen pixel densities have increased. 72 points is still an "abstract" inch, though.
In OS X you draw in points, and the OS takes care of mapping those points to the physical pixels on the screen. Roughly, screens up to a certain density are treated as 72 ppi (pixels/inch), or 1 pixel/point, and higher-density screens as 144 ppi, or 2 pixels/point; these are, for example, the ppi assigned to standard and Retina screenshots.
If you really, really need to know what a point translates to in pixels you can find out, but this changes depending on what screen a window is on.
For details on all of this you can start with "Points Don't Correspond to Pixels" and then read the rest of the High Resolution Guidelines for OS X that the reference is part of. How to find the point-to-backing-store mapping, if you really, really need to know, is covered there.
HTH
There is an opportunity for confusion when your user specifies a length in "points": they may mean typography points of 1/72", or lengths in Mac UI points, which vary with the display resolution.
In Mac OS, "points" are pixels, unless you are in high-resolution mode, in which case each point is 2x2 pixels. The "Points Don't Correspond to Pixels" page says that "on a high-resolution display, there are four onscreen pixels for each point", indicating a 4:1 correspondence in hi-res and 1:1 in standard res. It also notes:
Note: The term points has its origin in the print industry, which defines 72 points as equal to 1 inch in physical space. When used in reference to high resolution in OS X, points in user space do not have any relation to measurements in the physical world.
To convert a typographer's point size to something physically the same size on a Mac screen, in Mac points, your formula is exactly correct. You might just rename 'pixel' to Mac points:
MacPoints = (TypesettersPoints/72)*ResolutionInDotsPerInch
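A direct transcription in C (the function name is mine):

/* Convert a typographer's point size (1/72 inch) to Mac points of the
   same physical size, given the screen resolution in dots per inch. */
double mac_points(double typesetters_points, double resolution_dpi)
{
    return typesetters_points / 72.0 * resolution_dpi;
}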
Best to stick with points.
First you would need to know where the point is coming from. Views, Windows and Screens all have their own coordinate systems.
You would need to do several things to translate this to the pixel grid of a given screen.
First you need to convert your point to screen coordinates, then to the coordinates of the screen's pixel grid.
You will also need to find out the current display properties, to know whether or not it's a Retina display (it makes a big difference).
All of the methods are in NSView, NSWindow, NSScreen.
All of the functions are part of Quartz Display Services. You will need the ones for CGDisplay; you might need the ones for CGWindow.
You will also need to have your app observe notifications for display configuration changes, and figure out the hard part: what to do when a point is in a coordinate space that overlaps two screens.
I leave it to you to do the rest and decide if you really need this.
I am trying to fit an image into a predefined graphical representation of a frame (not a view.frame) using a UIScrollView's zoom.
For adjusting the zoom scale so that the image fits into the desired frame width of 250, the code is basically:
float frameWidth = 250;
float currentZoomScale = frameWidth / currentImage.size.width;
self.scrollView.zoomScale = currentZoomScale;
This works almost fine... almost. My problem is a slight inaccuracy that depends on the image width.
For example, an image with a width of 640 will result in a zoomScale of 0.390625.
But the visible image width on the screen will be 1 pixel short of 250. With other images of different sizes the algorithm works.
I suspect the reason is that the fractional division result collides with the integer nature of the actual screen pixels. I mean that the zoom scale should perhaps be something like 0.391 or similar (I tried 0.4, which is too big).
My questions:
1. Is the algorithm above the right way to get what I want?
2. If yes, is there a way to take the inaccuracy into account, i.e. a better algorithm?
Thanks for any reply!
I suspect the division you are using produces a fractional result, and when the scaled image is mapped to whole screen pixels the fraction is dropped, because you can't have 0.5 of a pixel. You could round the scale up instead, so the displayed width never falls short. For example, round up at the third decimal place:
currentZoomScale = ceilf(frameWidth / currentImage.size.width * 1000.0f) / 1000.0f;
This errs on the side of one pixel too wide rather than one too narrow.
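If rounding at a fixed decimal is too coarse, a more defensive sketch (the helper name is mine) nudges the scale upward by the smallest possible float steps until the truncated on-screen width reaches the target:

#include <math.h>

/* Return a zoom scale that maps image_width onto at least frame_width
   whole pixels, assuming the renderer truncates fractional pixels. */
float fit_scale(float frame_width, float image_width)
{
    float scale = frame_width / image_width;
    while (floorf(image_width * scale) < frame_width)
        scale = nextafterf(scale, INFINITY);
    return scale;
}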