I'm getting wrong values in some printers.
For example, dc.GetDeviceCaps(PHYSICALOFFSETX) returns 42 on one printer whose LOGPIXELSX is 360, so the left margin should be 2.96 millimeters (42 / 360 inch ≈ 0.117 inch ≈ 2.96 mm), but an actual test shows that it is 5 millimeters!
P.S.: PHYSICALOFFSETY works fine!
It depends on the printer and the driver, and possibly on how you load the paper. For example, on a lot of tractor-feed (e.g., dot-matrix) printers, there's a lot of horizontal play, and it's up to the user to load the paper correctly.
The other problem I've seen is that some printer drivers forget to swap the reported horizontal and vertical offsets and resolutions when you switch the page orientation (landscape/portrait) in the middle of a job. But that's pretty easy to detect and correct for.
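A hedged sketch of that detection and correction, assuming hdc is a printer DC that was just switched to landscape on non-square paper; if the driver still reports portrait-shaped dimensions, swap the values yourself:

#include <windows.h>
#include <utility>

// Read the physical caps and undo a driver that forgot to swap them
// after an orientation change (sketch, not a definitive fix).
void FixSwappedCaps(HDC hdc, int &offX, int &offY, int &physW, int &physH)
{
    physW = GetDeviceCaps(hdc, PHYSICALWIDTH);
    physH = GetDeviceCaps(hdc, PHYSICALHEIGHT);
    offX  = GetDeviceCaps(hdc, PHYSICALOFFSETX);
    offY  = GetDeviceCaps(hdc, PHYSICALOFFSETY);
    if (physW < physH) {            // landscape requested, portrait reported
        std::swap(physW, physH);
        std::swap(offX, offY);
    }
}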
Software that's supposed to print data in pre-drawn boxes on forms (e.g., invoices, checks, etc.) generally has an interactive alignment process to allow the user to make adjustments to compensate for printer error, paper loading error, etc.
I've tried to find an answer for this on MSDN, but I'm not getting a clear picture of how this is intended to work. All of my work is on Windows 8.1.
Here is my issue. I am working on a Laptop with a high resolution monitor, 3200x1800. I've been using EnumDisplayMonitors to get the bounding rectangle of my screen.
This seems to work fine if my display settings are the default. But I've noticed that when I change the Windows display settings to provide larger text, the resolution returned by EnumDisplayMonitors changes. Rather than getting 3200x1800 I get 2133x1200.
I'm guessing since I asked for larger text, Windows chooses to represent the screen as a smaller resolution.
It seems that if I look at the virtual screen properties, everything is represented in the actual coordinates of my screen, i.e. 3200x1800. But the APIs for getting the window and monitor rectangles seem to operate on this "other" coordinate space.
Is there any documentation or a Windows API to handle the conversion between these "other coordinates" and the "virtual coordinates"? I.e., if I want EnumDisplayMonitors or GetMonitorInfo to give me the true screen coordinates, how could I convert 2133x1200 to 3200x1800?
You have increased the DPI of the video adapter to 150% (144 dots per inch) to keep text readable and avoid having windows the size of a postage stamp. Quite necessary on such high resolution displays. But you haven't told Windows that your program knows how to deal with it.
So it assumes your program is an old one that was never designed to run on such monitors. It helps, and lies to you. It gets your program to render its output to a memory buffer, then takes that output, rescales it by 150% and copies it to the video adapter. This is something you can see: text looks fuzzier if you put your program's output next to a program that doesn't ask for this kind of scaling, like Notepad.
And of course, it lies to you when you ask for the size of the screen. It reports a screen that is smaller than it really is, scaled down by the same 150% factor. So that, after rescaling, a window you create will fill the screen.
Which is all just fine, but of course not ideal; your program doesn't look as good as it should. You have to tell Windows that you know how to deal with the higher resolution. Do beware that this looks easier than it is in practice. Getting text to look crisp is trivial; it is bitmaps that are problematic. And in general it is a fertile source of bugs; even the big companies can get this wrong.
Before I start with an answer, let me ask: what are you really trying to do? Or, more specifically, why do you need to know the monitor resolution? The standard way to do this is to call GetWindowRect(GetDesktopWindow(), &rect). I'm not sure if the screen coordinates change based on DPI settings, but you should try that instead of GetMonitorInfo, as the latter is for more advanced stuff. And if GetWindowRect still returns a scaled rect, just call DPtoLP, LPtoDP or another coordinate-mapping function as appropriate.
When you adjust the display settings as you described, you are actually changing the DPI settings of the screen. As such, certain APIs go into compatibility mode so that they allow the app to create larger elements and windows without knowing anything about this setting.
Why do you need to know the actual screen resolution since most of the windowing APIs will behave accordingly when the DPI scaling changes?
I suspect you could call SetProcessDPIAware or use the manifest-file equivalent. But do read this MSDN article first to understand DPI scaling.
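A minimal sketch of the runtime opt-in, assuming Windows Vista or later: call it before creating any window, and the monitor APIs stop lying and report physical pixels (3200x1800 instead of 2133x1200). The manifest entry is the preferred equivalent.

#include <windows.h>
#include <cstdio>

int main()
{
    SetProcessDPIAware();                        // runtime opt-in to DPI awareness

    int w = GetSystemMetrics(SM_CXSCREEN);       // now the true width
    int h = GetSystemMetrics(SM_CYSCREEN);       // now the true height
    HDC screen = GetDC(nullptr);
    int dpi = GetDeviceCaps(screen, LOGPIXELSX); // e.g. 144 at 150% scaling
    ReleaseDC(nullptr, screen);

    printf("%dx%d at %d DPI\n", w, h, dpi);
    return 0;
}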
My question is: how do GUI libraries like Qt (let's say on Windows) create all those graphical user interfaces (windows, etc.)?
Does each operating system provide APIs to do so? If yes, how do operating systems draw all those windows and things? Do they "control" the screen and then draw each pixel one by one to achieve their goal, the GUI?
I would like an answer that explains things at the lowest level possible, but I don't demand that someone write out everything that happens (even if I would like that), because I know there's a lot behind all this. So, for this reason, comments with links or suggested books that explain in detail what is happening under the hood would be appreciated.
Stack Overflow answers are not supposed to rely on links; comments can, but not answers.
Each operating system and GUI library is different but, yes, in some way, shape, or form they do actually draw every one of the pixels. It is often quite organized, and many performance tricks are used: optimized routines that can update a rectangle or some chunk of a screen, and sometimes hardware gets involved (these days the GPU does much of the work; the CPU asks the GPU to draw something, and the GPU places all the pixels).
You would likely, for example, want to create some sort of font-rendering function that is given the font, the string to display, the font size, and perhaps a clipping window so it doesn't draw outside its bounds; or perhaps a function that, given the font, size and string, returns the number of pixels, so you can adjust the string to fit and wrap (look around this web page, for example: drag the window wider and narrower and watch what the text does).
Definitely some sort of image drawing routines with the ability to stretch or fit the drawing to the rectangle defined.
The fun stuff, games, etc. has improved so rapidly over time that it is hard to go back to a simple line-draw and area-fill routine. But along with that technology, simple things like web pages benefit from what games brought... again, look around.
There are many open source programs and libraries you should just wander around the source code and see what you see.
The operating system provides libraries that interface with the monitor/display. In short, GUI libraries such as Qt interact with those operating-system libraries and create an easier bridge for you, the programmer, to interact with the monitor. For instance, Qt has a drawLine function, which underneath takes care of the pixel arrangement needed to draw on the monitor/display for the operating system.
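A hedged sketch of that bridge: a minimal Qt widget that paints a line. The QPainter::drawLine call below is translated by Qt into the platform's drawing backend, which the OS and driver ultimately turn into pixels.

#include <QApplication>
#include <QWidget>
#include <QPainter>
#include <QPaintEvent>

class LineWidget : public QWidget {
protected:
    void paintEvent(QPaintEvent *) override {
        QPainter p(this);
        p.drawLine(10, 10, width() - 10, height() - 10);  // device-independent call
    }
};

int main(int argc, char **argv) {
    QApplication app(argc, argv);
    LineWidget w;
    w.resize(200, 120);
    w.show();
    return app.exec();
}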
What happens during a display mode change (resolution, depth) on an ordinary computer (classic desktops and laptops)?
It might not be so trivial since video cards are so different, but one thing is common to all of them:
The screen goes black (understandable since the signal is turned off)
It takes many seconds for the signal to return with the new mode
and if it is under D3D or GL:
The graphics device is lost and all VRAM objects must be reloaded, making the mode change take even longer
Can someone explain the underlying nature of this, and specifically why a display mode change is not a trivial reallocation of the backbuffer(s) and takes such a "long" time?
The only thing that actually changes are the settings of the so-called RAMDAC (a digital-to-analog converter directly attached to the video RAM); today, with digital connections, it's more like a "RAMTX" (a DVI/HDMI/DisplayPort transmitter attached to the video RAM). DOS graphics programmer veterans probably remember the fights between the RAMDAC, the specification and the woes of one's own code.
It actually doesn't take seconds until the signal returns. This is a rather quick process, but most display devices take their time to synchronize with the new signal parameters. Actually, with well-written drivers the change happens almost immediately, between vertical blanks. A few years ago, when displays were, errr, stupider and analogue, after changing the video mode settings one could see the picture going berserk for a short moment until the display resynchronized (maybe I should take a video of this while I still own equipment capable of it).
Since what actually is going on is just a change of RAMDAC settings, there's also no necessary data loss as long as the basic parameters stay the same: number of bits per pixel, number of components per pixel, and pixel stride. And in fact OpenGL contexts usually don't lose their data with a video mode change. Of course visible framebuffer layouts change, but that happens also when moving the window around.
DirectX Graphics is a bit of a different story, though. There is device-exclusive access, and whenever you switch between Direct3D fullscreen mode and regular desktop mode all graphics objects are swapped out, so that's the reason DirectX Graphics is so laggy when switching from/to a game to the Windows desktop.
If the pixel data format changes, it usually requires a full reinitialization of the visible framebuffer, but today's GPUs are exceptionally good at mapping heterogeneous pixel formats into a target framebuffer, so no delays are necessary there either.
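A hedged sketch of the "device lost" dance that fullscreen Direct3D 9 applications had to do after such a switch; device and pp are assumed to exist from initialization, and the two helpers for your own D3DPOOL_DEFAULT resources are hypothetical.

#include <windows.h>
#include <d3d9.h>

void ReleaseDefaultPoolObjects();    // hypothetical: free D3DPOOL_DEFAULT resources
void RecreateDefaultPoolObjects();   // hypothetical: recreate them after a Reset

void HandleModeSwitch(IDirect3DDevice9 *device, D3DPRESENT_PARAMETERS &pp)
{
    HRESULT hr = device->TestCooperativeLevel();
    if (hr == D3DERR_DEVICELOST) {
        Sleep(50);                            // device unusable; poll again next frame
    } else if (hr == D3DERR_DEVICENOTRESET) {
        ReleaseDefaultPoolObjects();          // must happen before Reset can succeed
        if (SUCCEEDED(device->Reset(&pp)))
            RecreateDefaultPoolObjects();     // reload textures, buffers, render targets
    }
}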
How do people make GUIs? I mean the basic building block or principle they use to draw visual components on the screen, like KDE, Gnome, etc. Are there any simple examples of how to draw something like a rectangle on the screen by directly dealing with the hardware?
I am using a PC for those who are asking about my platform.
Well okay, let's start at the bottom. You have a monitor that displays an image. This image is a matrix of pixels, say, 1600x1200 pixels with 24-bit depth.
The monitor knows what to display from the video adapter. The video adapter knows what to display through the "frame buffer", which is a big block of memory that - in this example - contains 1600 * 1200 pixels, usually with 32 bits per pixel in contemporary cards.
The frame buffer is often accessible to the CPU as a big block of memory that it can poke into directly, and some adapters have GPUs that allow for things like rendering stuff into the frame buffer, like shaded textured triangles, so the CPU just sends commands through a "command buffer", telling it what to draw and where.
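A minimal sketch of what "poking into the frame buffer" amounts to, assuming a linear 32-bits-per-pixel buffer at fb with stride pixels per scanline (both hypothetical here; on real hardware the driver hands you that mapping):

#include <cstdint>

// Write one pixel: each pixel is one 32-bit word, e.g. 0x00FF0000 is red.
inline void put_pixel(uint32_t *fb, int stride, int x, int y, uint32_t xrgb)
{
    fb[y * stride + x] = xrgb;
}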
Then you have the operating system, which loads a hardware driver that communicates with the video adapter.
The operating system usually offers functions to write to the frame buffer. Win32, for example, has lots of functions like BitBlt, LineTo and TextOut. These end up talking to the driver.
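A small sketch of those GDI calls inside a WM_PAINT handler; hwnd is assumed to be a valid window handle:

#include <windows.h>

void OnPaint(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    Rectangle(hdc, 10, 10, 110, 60);          // outline with the current pen, filled with the brush
    MoveToEx(hdc, 10, 70, nullptr);
    LineTo(hdc, 110, 70);                     // a line with the current pen
    TextOut(hdc, 10, 80, TEXT("hello"), 5);   // text in the current font
    EndPaint(hwnd, &ps);
}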
Then you have something like Java, that renders its own graphics, typically using functions provided by the operating system.
The simple answer is bitmaps, in fact this would also apply to fonts on terminals in the early days.
The original GUIs, things like Xerox PARC's Alto GUI, were based on bitmap displays, and the graphics were drawn with simple bitmap drawing tools and graphics libraries, using simple geometry to determine shapes like circles, squares, rectangles, etc., and then mapping them to display pixels.
Today's GUIs are the same, except with additional software and hardware that have sped up and improved the process and performance of these GUIs.
The fundamental mapping of bits, e.g. 10101010, to pixels is dependent on the display hardware, but at a simplistic level you would provide a display buffer in memory and simply populate its bytes with the display data.
So for a basic monochrome bitmap, you'd draw it by providing bits that represent the shape you want to draw. You position these bits like this; here is a simple 8x8-pixel button:
01111110
10000001
10000001
10111101
10111101
10000001
10000001
01111110
This is easier to see if I render it with # and SPACE instead of 1 and 0:
 ###### 
#      #
#      #
# #### #
# #### #
#      #
#      #
 ###### 
As a bitmap image it would look like this: http://i.stack.imgur.com/i7lVQ.png (I know it's a bit small :) but this is the sort of scale we would've begun at, when GUIs were first developed.)
If you had a more complex display (e.g. 24-bit color), you'd provide each pixel as a 24-bit number.
Obviously some bitmaps cannot be drawn by hand like we've done above (for example the border of a window); this is where geometry comes in handy, and we can use simple functions to determine the pixel values required to draw a rectangle, or any other simple shape, and then build from there, as sketched below.
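A hedged sketch of such a function: drawing a rectangle outline into a 32-bpp framebuffer, one pixel at a time. fb, stride and the coordinates are assumed valid and in-bounds.

#include <cstdint>

void draw_rect(uint32_t *fb, int stride, int x, int y, int w, int h, uint32_t color)
{
    for (int i = 0; i < w; ++i) {
        fb[y * stride + x + i] = color;               // top edge
        fb[(y + h - 1) * stride + x + i] = color;     // bottom edge
    }
    for (int j = 0; j < h; ++j) {
        fb[(y + j) * stride + x] = color;             // left edge
        fb[(y + j) * stride + x + w - 1] = color;     // right edge
    }
}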
Once you are able to draw graphics in this way on a display, you then hook a drawing loop onto a system interrupt to keep the display up to date (you redraw the display very often, depending on your system performance.) This way you can make it handle interaction from user devices, e.g. a mouse.
Back in the early days, even before Xerox PARC / Alto, there were a number of early computer systems which had vector-based displays; these would make up an image by drawing lines on a CRT representation of a Cartesian plane. However, these displays never saw mainstream use, except perhaps in some early video games, like Asteroids and Tempest.
You probably need a graphics library such as, for example, OpenGL.
For direct hardware interaction, you probably need to do something like assembly, which is completely computer specific.
If you are willing to look through a lot of source code, you might look at Mesa 3D, an open source implementation of the OpenGL specification.
I'm creating PDFs server-side with lots of graphics, so maximizing real estate is a must, but ensuring users' printers can handle the tight margins is also a must.
Does anyone have an idea what safe values I can use for the margins when authoring the PDFs? In the past I've used work and home printers with margins of about one centimeter with no problems, but of course I can't take this as the de facto minimum.
Oh, and I don't really want to let the user specify the margin (50% laziness, 50% it will get complicated).
I've googled ("average minimum margin printing") but couldn't find anything concrete.
Every printer is different but 0.25" (6.35 mm) is a safe bet.
For every PostScript printer, one part of its driver is an ASCII file called a PostScript Printer Description (PPD). PPDs are also used in the CUPS printing system on Linux and Mac OS X, even for non-PostScript printers.
Every PPD MUST, according to the PPD specification written by Adobe, contain definitions of a *ImageableArea (that's a PPD keyword) for each and every media size it can handle. That value is given, for example, as *ImageableArea Folio/8,25x13: "12 12 583 923" for one printer in this office here, and *ImageableArea Folio/8,25x13: "0 0 595 935" for the one sitting in the next room.
These figures mean "Lower left corner is at (12|12), upper right corner is at (583|923)" (where these figures are measured in points; 72pt == 1inch). Can you see that the first printer does print with a margin of 1/6 inch? -- Can you also see that the next one can even print borderless?
What you need to know is this: Even if the printer can do very small margins physically, if the PPD *ImageableArea is set to a wider margin, the print data generated by the driver and sent to the printer will be clipped according to the PPD setting -- not by the printer itself.
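A hedged sketch turning the first printer's *ImageableArea quoted above into margins, taking the 595x935 pt Folio page size from the second, borderless entry:

#include <cstdio>

int main()
{
    const double llx = 12, lly = 12, urx = 583, ury = 923;  // "12 12 583 923"
    const double pageW = 595, pageH = 935;                  // Folio in points (72 pt = 1 inch)
    const double ptToMm = 25.4 / 72.0;

    printf("left %.1f mm, bottom %.1f mm, right %.1f mm, top %.1f mm\n",
           llx * ptToMm, lly * ptToMm,
           (pageW - urx) * ptToMm, (pageH - ury) * ptToMm);
    return 0;   // prints 4.2 mm all around, i.e. the 1/6 inch mentioned above
}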
These days more and more models appear on the market which can indeed print edge-to-edge. This is especially true for office laser printers. (Don't know about devices for the home use market.) Sometimes you have to enable that borderless mode with a separate switch in the driver settings, sometimes also on the device itself (front panel, or web interface).
Older models, for example HP's, define their margins in their PPDs quite generously, just to be on the supposedly "safe side". Very often HP used 1/3 inch, 1/2 inch or more (like "24 24 588 768" for Letter format). I remember having hacked HP PPDs and tuned them down to "6 6 606 786" (1/12 inch) before the physical boundaries of the device kicked in and enforced a real clipping of the page image.
Now, PCL and other language printers are not that much different in their margin capabilities from PostScript models.
But of course, when it comes to printing of PDF docs, here you can nearly always choose "print to fit" or similarly named options. Even for a file that itself does not use any margins. That "fit" is what the PDF viewer reads from the driver, and the viewer then scales down the page to the *ImageableArea.
As a general rule of thumb, I use 1 cm margins when producing PDFs. I work in the geospatial industry and produce PDF maps that reference a specific geographic scale, so I do not have the option to 'fit document to printable area', because this would make the reference scale inaccurate.
You must also realize that when you fit to the printable area, you are fitting your already-existing margins inside the printer margins, so you end up with double margins. Make your margins the right size and your documents will print perfectly.
Many modern printers can print with margins of less than 3 mm, so 1 cm as a general rule should be sufficient. However, if it is a high-profile job, get the specs of the printer you will be printing with and ensure that your margins are adequate. All you need is the brand and model number, and you can find spec sheets through a Google search.
The margins vary depending on the printer. In Windows GDI, you call the following functions to get the built-in margins, the "no-print zone":
int physW = GetDeviceCaps(hdc, PHYSICALWIDTH);    // full paper width, in device units
int physH = GetDeviceCaps(hdc, PHYSICALHEIGHT);   // full paper height
int offX  = GetDeviceCaps(hdc, PHYSICALOFFSETX);  // left edge of the no-print zone
int offY  = GetDeviceCaps(hdc, PHYSICALOFFSETY);  // top edge of the no-print zone
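To express that no-print zone in millimeters (a hedged follow-on sketch, assuming hdc is a printer DC):

int dpiX = GetDeviceCaps(hdc, LOGPIXELSX);        // printer resolution, dots per inch
double leftMarginMm = offX * 25.4 / dpiX;         // e.g. 42 * 25.4 / 360 = 2.96 mm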
Printing right to the edge is called a "bleed" in the printing industry. The only laser printer I ever knew to print right to the edge was the Xerox 9700: 120 ppm, $500K in 1980.
You shouldn't need to let the users specify the margin on your website; let them do it on their computer. Print dialogs usually (Adobe and Preview, at least) give you an option to scale and center the output on the printable area of the page:
(screenshots of the Adobe and Preview print dialogs)
Of course, this assumes that you have computer literate users, which may or may not be the case.