I am trying to use PostScript to watermark PostScript files. I am doing this with setpagedevice like so:
<<
  /EndPage {
    % stack on entry: pagecount reasoncode
    exch pop 2 lt {     % reason 0 or 1: a page is actually being emitted
      /Times-Roman 40 selectfont
      .6 setgray 300 300 moveto 30 rotate (Watermark) show
      true }            % transmit the page
    { false } ifelse    % reason 2: device deactivation, suppress
  } bind
>> setpagedevice
(file_to_watermark.ps) run
This works great, but I would like the watermark to be centered on the page, regardless of page size (this code needs to work for varying sizes of file_to_watermark.ps). My code right now positions the watermark at fixed coordinates, which obviously doesn't center the mark if a different file_to_watermark.ps uses a different page size (e.g. legal, letter, etc.). Is there some way to retrieve the page size of the current file_to_watermark.ps and center the watermark on the page based on that, rather than on predefined coordinates?
You could extract the current media size from the pagedevice dictionary:
currentpagedevice /PageSize get
which leaves a two-element array [width height] (in points) on the stack; use aload pop to unpack it. You can then use the stringwidth operator to calculate the amount of horizontal space occupied by the given string when printed. There's no simple way to get the vertical height, but the point size is as good a guide as anything for Latin fonts.
Subtracting the string width from the page width and dividing by two should be good enough; similarly for the y coordinate.
For real printers, instead of using the PageSize, the initial clip is sometimes a better bet:
initclip clippath flattenpath pathbbox
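Putting those pieces together, here is a sketch of the EndPage procedure with the watermark centered on whatever the current page size happens to be (the .33 factor is only a rough estimate of half the cap height of a 40 pt Times-Roman; adjust to taste):

<<
  /EndPage {
    exch pop 2 lt {
      gsave
      /Times-Roman 40 selectfont
      .6 setgray
      currentpagedevice /PageSize get aload pop   % -> width height
      2 div exch 2 div exch translate             % move origin to page center
      30 rotate                                   % keep the slant, if wanted
      (Watermark) dup stringwidth pop 2 div neg   % x: back up half the string width
      40 .33 mul neg                              % y: down ~half the cap height
      moveto show
      grestore
      true }
    { false } ifelse
  } bind
>> setpagedevice
(file_to_watermark.ps) run

If you use the initclip/pathbbox variant instead, center on the midpoints of the returned bounding box rather than on PageSize divided by two.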
Related
I am displaying bitmaps with the function SetDIBitsToDevice. This function knows about the total image size via a LPBITMAPINFO structure that has Width and Height fields. It also knows about the region of interest to be drawn via the arguments XDest, YDest, Width, Height. All these are specified in pixels.
So far so good when the image is stored as a canonical one, i.e. with a row pitch (number of bytes between a pixel and the one immediately below) that matches the image width in bytes, with padding (if necessary) to reach the next multiple of four bytes.
For technical reasons, I have images with a larger pitch (but still a multiple of four). For instance, width=1000 but pitch=1024. For a grayscale image (1 byte per pixel), I can trick the function by declaring a width of 1024 in LPBITMAPINFO and a width of 1000 when passed to SetDIBitsToDevice.
But for a 3-bytes-per-pixel (RGB) image, I am stuck: 1024 bytes do not correspond to an integer number of pixels, and I see no way to specify that pitch.
Do you see a workaround, or something I missed in the documentation? (I don't think the SizeImage field can be of any use.)
Maybe it's just my head spinning, but there seems to be no documentation on the units of measure for HPDF's HPDF_Font_TextWidth() function, nor can I figure it out.
The number I get for a particular text of 7 characters is around 3000. The rendered text seems to be around 80 pixels, which is also returned from HPDF_Page_TextWidth().
HPDF_Font_TextWidth() does not know the font size so it must use some other unit. What is it?
And is that the same unit that HPDF_Font_GetBBox() returns?
I'm actually trying to put text in the center of a rectangle, and need the width and height of the text in the units of the rectangle.
This is an old post, but I just stumbled upon it because I had the same issue. As far as I know, looking at the source of HPDF_Font_GetUnicodeWidth(), the value it returns needs to be multiplied by the font size, then divided by 1000, to get the width in points, which is what the rest of the PDF coordinate system uses.
width = (HPDF_Font_TextWidth() * font_size) / 1000.0;
All of the following return EM units (thousandths of the font size), which must likewise be divided by 1000 and multiplied by the point size to get points, as stated above: HPDF_Font_GetBBox(), HPDF_Font_GetAscent(), HPDF_Font_GetDescent() and HPDF_Font_GetCapHeight().
The units are relative to the baseline; the descender and the BBox left and bottom are negative. The zone between the cap height and the ascender is for diacritics.
To calculate the height of a slug of text, compute cap height minus descender, or ascender minus descender if your text has upper-case diacritics.
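To center a string in a rectangle using these metrics, the arithmetic looks roughly like this (a C# sketch; rawTextWidth, capHeight and descent are my names for the values returned by HPDF_Font_TextWidth(), HPDF_Font_GetCapHeight() and HPDF_Font_GetDescent(), all in EM units, with descent negative):

static (float x, float y) CenterTextInRect(
    float rectX, float rectY, float rectW, float rectH,
    float rawTextWidth, float capHeight, float descent, float fontSize)
{
    // EM units -> points: multiply by the font size, divide by 1000
    float widthPt  = rawTextWidth * fontSize / 1000f;
    float heightPt = (capHeight - descent) * fontSize / 1000f; // cap height less (negative) descender

    float x = rectX + (rectW - widthPt) / 2f;
    // The returned y is the baseline; lift it off the slug's bottom by |descent|
    float y = rectY + (rectH - heightPt) / 2f - descent * fontSize / 1000f;
    return (x, y);
}

The resulting x and y can then go straight into HPDF_Page_TextOut().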
Keyword: Haru PDF
Picture a piece of metal bar, 20 mm long, about 30 mm round. There are numbers stamped on the bar: 10 characters, 4.5 mm high, spread around approximately 120° of the circumference.
I need to perform OCR on the characters, BUT they are not all visible in one image. Three images spaced at around 30° seem to look OK.
The next issue is that the metal is freshly machined, and the characters do not seem to OCR well; I think this is due to the lack of real contrast, i.e. black/white difference.
Does anyone have any ideas on how these characters could be OCR'd?
You could try this ImageMagick command to increase 'contrast'. It basically leaves only two values (zero or maximum) for each color channel: every value below the threshold gets set to 0, and every value above it gets set to 255 (or 65535 when working at 16-bit depth):
convert original.jpg -threshold 50% modified.jpg
Play with the 50% value to get the best results, setting it higher or lower as needed. Depending on your input image, this could already be enough to get images that are OK for OCR-ing.
I have an application that runs on a mobile device. I am moving my app from a resolution of 240W x 320H to 640W x 480H.
I have a bunch of columns that have their width in pixels (say 55 for example). I need to convert this to the new resolution.
"Easy", I thought 640/240 = 2 2/3. So I just take 55 * 2.6666667 and that is my new width.
Alas, that did not work. My columns (all together) are larger than my screen size now.
I tried 55 * 2 and that is too small. I am sure I could get there by trial and error, but I want to know the exact ratio.
So, what am I missing? How do I calculate my new column widths (other than by trial and error)?
Rounding is your problem. Suppose you have 24 columns of 10 pixels on the 240-pixel display. You calculate the new width: round(10 * 2.667) = 27, so the total width sums to 24 * 27 = 648 > 640. Oops...
To get this right you need to scale the absolute locations of the column edges. That is, if column number k begins at x-coordinate X, then after scaling it should begin at round(X * 2.667). After this, subtract the rounded left-side X from the rounded right-side X to get the width. This way some widths round down and some round up, but the total width stays inside your limits, as in the sketch below.
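A minimal C# sketch of that edge-based scaling (the method name and the array-of-widths representation are mine, just for illustration):

using System;

static class ColumnScaler
{
    // Scale column widths by scaling the column *edges*, so rounding
    // errors never accumulate past the target total width.
    public static int[] Scale(int[] oldWidths, int oldTotal, int newTotal)
    {
        double factor = (double)newTotal / oldTotal;   // e.g. 640.0 / 240
        int[] newWidths = new int[oldWidths.Length];
        int oldEdge = 0, prevScaledEdge = 0;
        for (int k = 0; k < oldWidths.Length; k++)
        {
            oldEdge += oldWidths[k];                            // right edge of column k
            int scaledEdge = (int)Math.Round(oldEdge * factor); // scaled right edge
            newWidths[k] = scaledEdge - prevScaledEdge;         // width between scaled edges
            prevScaledEdge = scaledEdge;
        }
        return newWidths;
    }
}

In the 24-column example this produces a mix of 26- and 27-pixel widths that total exactly 640.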
The screen DPI changes when the resolution changes, so you need to take that into account.
Check this link about DPI Aware apps and search according to your platform (Native or CF)
I think your logic is good, but maybe you had rounding errors? If you want to make sure the total width is less than the screen resolution, after multiplying by the scale factor you should always round down to the nearest integer to get the width in pixels.
Also, if your columns have any padding, borders, or other space between them, you would have to take that into account as well.
If you can run on a desktop environment, I know there are "pixel ruler" sort of tools to actually measure things on the screen, you can search Google for them.
I just want to know if the pixel is a unit that doesn't change, and whether we can convert from pixels to, let's say, centimeters?
Similar to this question which asks about points instead of centimeters. There are 72 points per inch and there are 2.54 centimeters per inch, so just substitute 2.54 for 72 in the answer to that question. I'll quote and correct my answer here:
There are 2.54 centimeters per inch; if it is sufficient to assume 96 pixels per inch, the formula is rather simple:
centimeters = pixels * 2.54 / 96
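As a C# one-liner (assuming that fixed 96 pixels per inch):

// Pixels -> centimeters under the common assumption of 96 pixels per inch.
static double PixelsToCentimeters(double pixels) => pixels * 2.54 / 96.0;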
For Microsoft Windows, there is a way to get the configured pixels per inch of your display, called GetDeviceCaps. Microsoft has a guide called "Developing DPI-Aware Applications"; look for the section "Creating DPI-Aware Fonts".
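Here is a sketch of that query from C# via P/Invoke; LOGPIXELSX (88) is the GetDeviceCaps index for horizontal pixels per logical inch. Note this returns the DPI Windows is configured to use, not necessarily the physical density of the panel:

using System;
using System.Runtime.InteropServices;

static class ScreenDpi
{
    const int LOGPIXELSX = 88;   // GetDeviceCaps index: horizontal pixels per logical inch

    [DllImport("user32.dll")]
    static extern IntPtr GetDC(IntPtr hWnd);

    [DllImport("user32.dll")]
    static extern int ReleaseDC(IntPtr hWnd, IntPtr hDC);

    [DllImport("gdi32.dll")]
    static extern int GetDeviceCaps(IntPtr hDC, int index);

    public static double PixelsToCentimeters(double pixels)
    {
        IntPtr hdc = GetDC(IntPtr.Zero);            // device context for the whole screen
        int ppi = GetDeviceCaps(hdc, LOGPIXELSX);   // typically 96
        ReleaseDC(IntPtr.Zero, hdc);
        return pixels * 2.54 / ppi;
    }
}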
Converting pixels to centimeters depends on the DPI (dots per inch) of the medium displaying the image, e.g. monitor, laser printer, etc.
http://wiki.answers.com/Q/How_do_you_convert_pixels_into_centimeters
I'm going to go out on a limb and just guess that you want to be able to display things to the user on their monitor, scaled to be very close to its real life size.
If this is the case, I would recommend either displaying your items next to real-life items (credit cards, dollar bills, pop cans, etc.) or, even better, allowing the user to hold something up to the screen like a credit card, dollar bill, or ruler. You could then have them adjust a slider or something similar to meet the width or height of that object.
By holding a credit card, something with a relatively known height and width, up to the screen, you can easily determine the ratio of pixels to inches and use that to your hearts content.
Wiki says
Most credit cards are issued by local banks or credit unions, and are the shape and size specified by the ISO/IEC 7810 standard as ID-1 (85.60 × 53.98 mm)
Using mspaint, a credit card of mine is exactly 212 pixels tall; that's 212 pixels / 53.98 mm = 3.93 pixels per mm. Multiply by 10 and that's 39.3 pixels per cm.
You could EASILY do that programmatically via JavaScript, C#, Flash, whatever you want.
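For instance, a C# sketch of the calibration step (matchedPixels is a name I am assuming for the measured on-screen size of the card's 53.98 mm edge, e.g. the final value of the user's slider):

// The user resizes an on-screen bar until it matches the short edge of a
// credit card (53.98 mm per ISO/IEC 7810 ID-1); matchedPixels is the bar's
// final size in pixels.
static double PixelsPerCentimeter(double matchedPixels)
{
    const double cardShortEdgeCm = 5.398;
    return matchedPixels / cardShortEdgeCm;   // e.g. 212 / 5.398 ≈ 39.3
}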
You can convert from pixels to centimeters, but it's not a consistent conversion. It will depend on the size and resolution of the display device in question. The definition of a pixel will not change, but the size of a pixel will vary on different display devices.
No, different media and monitors have different pixel densities.
For instance, a desktop monitor may have 75 pixels per inch, whereas print may be output at 300.
Here is a list of displays by pixel density
In Adobe Illustrator CS3 :(, the figure I get is 1 cm = 28.347 pixels. (Note that 28.347 pixels per cm is exactly 72 pixels per inch, i.e. one pixel per point, which is what Illustrator uses internally regardless of the display.) Note I am using an iMac 7, which has a resolution of 102 pixels per inch, 40 ppcm according to the link http://en.wikipedia.org/wiki/List_of_displays_by_pixel_density provided by rebo.
I created an Adobe Illustrator CS3 document using JavaScript to test the value of 1 cm = 28.347 pixels, and it matches perfectly.
I know this question is very old but I was trying to find an answer to it and decided to share my findings.
Regards
The pixel unit depends on your screen resolution.
Well, first you should get the DPI (dots per inch) of your screen.
For example, suppose your screen DPI is 96:
1 cm = 37.795276 pixels at 96 DPI.
37.795276 / 96 = 0.3937008, which is the number of pixels per centimeter for each unit of DPI (that is, 1/2.54).
Now you can adapt this to your screen by getting the current screen DPI and multiplying it by 0.3937008F; that gives you the pixels per centimeter at your DPI (screen resolution).
Let's set up a scenario.
I want to make a method which takes centimeters and returns pixels, based on the screen DPI:
public float CentimeterToPixel(int valueCM, float dpi)
{
    // 1 inch = 2.54 cm, so pixels = cm * dpi / 2.54 = cm * dpi * 0.3937008
    return 0.3937008F * dpi * valueCM;
}
If you want to make it more accurate, you have to use DpiX and DpiY separately.
For example, in C# WinForms you can create a Graphics object from System.Drawing, get its DpiX and DpiY values, and design your area based on them for a more accurate calculation (in some cases the horizontal resolution differs from the vertical).
See the code below.
using System;
using System.Drawing;

namespace MyApp
{
    static class MyAppClass
    {
        // A 1x1 bitmap whose Graphics object reports the current screen's DPI
        private static Bitmap bmp = new Bitmap(1, 1);
        private static Graphics graphic = Graphics.FromImage(bmp);

        public static float CentimeterToPixelWidth(int valueCM)
        {
            return 0.3937008F * graphic.DpiX * valueCM; // 1 / 2.54 cm per inch
        }

        public static float CentimeterToPixelHeight(int valueCM)
        {
            return 0.3937008F * graphic.DpiY * valueCM;
        }
    }
}
Wish it helps you, Heydar.
The size of a pixel changes depending on the display device.
The following "found" code uses API calls to determine the pixel density: Get screen DPI in .NET
("Found" as in I googled it but haven't tried it)
As far as I understand it, a PIXEL is a
Picture Element
thus its physical size depends on two things:
(a) resolution
(b) physical screen size
Thus if you divide the physical screen size by the resolution, this should give you cm per pixel. For example, a screen 32 cm wide showing 1280 pixels across gives 32 / 1280 = 0.025 cm per pixel.