I'm creating PDFs server-side with lots of graphics, so maximizing real estate is a must, but so is ensuring that users' printers can handle the tight margins.
Does anyone have an idea what safe values I can use for the margins when authoring the PDFs? In the past I've used work and home printers with margins of about one centimeter with no problems, but of course I can't take that as the de facto minimum.
Oh, and I don't really want to let the user specify the margin (50% laziness, 50% it will get complicated).
I've googled but couldn't find anything concrete (average minimum margin for printing).
Every printer is different but 0.25" (6.35 mm) is a safe bet.
For every PostScript printer, one part of its driver is an ASCII file called a PostScript Printer Description (PPD). PPDs are also used by the CUPS printing system on Linux and Mac OS X, even for non-PostScript printers.
Every PPD MUST, according to the PPD specification written by Adobe, contain a *ImageableArea definition (that's a PPD keyword) for each and every media size it can handle. That value is given, for example, as *ImageableArea Folio/8,25x13: "12 12 583 923" for one printer in this office, and as *ImageableArea Folio/8,25x13: "0 0 595 935" for the one sitting in the next room.
These figures mean "the lower left corner is at (12|12) and the upper right corner is at (583|923)", measured in points (72 pt == 1 inch). Can you see that the first printer prints with a margin of 1/6 inch, and that the second one can even print borderless?
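To make the arithmetic explicit, here is a small illustrative C snippet (not from the original answer; the Folio media size of 594 x 936 pt is my assumption) that converts an *ImageableArea entry into margins in millimeters:
#include <stdio.h>

static double pt_to_mm(double pt) { return pt * 25.4 / 72.0; }   /* 72 pt = 1 inch */

int main(void)
{
    /* *ImageableArea Folio/8,25x13: "12 12 583 923"  (the first printer above) */
    double llx = 12, lly = 12, urx = 583, ury = 923;
    double media_w = 8.25 * 72.0, media_h = 13.0 * 72.0;   /* Folio: 594 x 936 pt */

    printf("left   %.1f mm\n", pt_to_mm(llx));
    printf("bottom %.1f mm\n", pt_to_mm(lly));
    printf("right  %.1f mm\n", pt_to_mm(media_w - urx));
    printf("top    %.1f mm\n", pt_to_mm(media_h - ury));
    return 0;
}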
What you need to know is this: Even if the printer can do very small margins physically, if the PPD *ImageableArea is set to a wider margin, the print data generated by the driver and sent to the printer will be clipped according to the PPD setting -- not by the printer itself.
These days more and more models appear on the market which can indeed print edge-to-edge. This is especially true for office laser printers. (Don't know about devices for the home use market.) Sometimes you have to enable that borderless mode with a separate switch in the driver settings, sometimes also on the device itself (front panel, or web interface).
Older models, HP's for example, define their margins quite generously in their PPDs, just to be on the supposedly "safe side". Very often HP used 1/3 inch, 1/2 inch or more (like "24 24 588 768" for Letter format). I remember having hacked HP PPDs and tuned them down to "6 6 606 786" (1/12 inch) before the physical limits of the device kicked in and enforced real clipping of the page image.
Now, PCL and other language printers are not that much different in their margin capabilities from PostScript models.
But of course, when it comes to printing PDF docs, you can nearly always choose "print to fit" or a similarly named option, even for a file that itself does not use any margins. That "fit" is what the PDF viewer reads from the driver: the viewer then scales the page down to the *ImageableArea.
As a general rule of thumb, I use 1 cm margins when producing PDFs. I work in the geospatial industry and produce PDF maps that reference a specific geographic scale, so I do not have the option to 'fit document to printable area', because that would make the reference scale inaccurate. You must also realize that when you fit to the printable area, you are fitting your already existing margins inside the printer margins, so you end up with double margins. Make your margins the right size and your documents will print perfectly. Many modern printers can print with margins of less than 3 mm, so 1 cm as a general rule should be sufficient. However, if it is a high-profile job, get the specs of the printer you will be printing with and make sure your margins are adequate. All you need is the brand and model number, and you can find spec sheets through a Google search.
The margins vary depending on the printer. In Windows GDI, you call the following functions to get the built-in margins, the "no-print zone":
int physW = GetDeviceCaps(hdc, PHYSICALWIDTH);   // full paper width, in device units
int physH = GetDeviceCaps(hdc, PHYSICALHEIGHT);  // full paper height, in device units
int offX  = GetDeviceCaps(hdc, PHYSICALOFFSETX); // left edge of the printable area
int offY  = GetDeviceCaps(hdc, PHYSICALOFFSETY); // top edge of the printable area
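For completeness, a rough sketch (mine, not the original answer's) of turning those caps into margins in millimeters; hdc is assumed to be a printer DC, for example one returned by PrintDlg or CreateDC:
int resX = GetDeviceCaps(hdc, HORZRES);      // printable width, device units
int resY = GetDeviceCaps(hdc, VERTRES);      // printable height, device units
int dpiX = GetDeviceCaps(hdc, LOGPIXELSX);   // device units per inch, horizontal
int dpiY = GetDeviceCaps(hdc, LOGPIXELSY);   // device units per inch, vertical

double leftMm   = offX * 25.4 / dpiX;
double topMm    = offY * 25.4 / dpiY;
double rightMm  = (physW - offX - resX) * 25.4 / dpiX;
double bottomMm = (physH - offY - resY) * 25.4 / dpiY;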
Printing right to the edge is called a "bleed" in the printing industry. The only laser printer I ever knew to print right to the edge was the Xerox 9700: 120 ppm, $500K in 1980.
You shouldn't need to let users specify the margin on your website; let them do it on their own computer. Print dialogs (Adobe and Preview, at least) usually give you an option to scale and center the output on the printable area of the page:
(screenshots of the Adobe and Preview print dialogs)
Of course, this assumes that you have computer literate users, which may or may not be the case.
Related
I'm getting wrong values in some printers.
For example, dc.GetDeviceCaps(PHYSICALOFFSETX) returns 42 on one printer and LOGPIXELSX is 360, so the left margin should be 2.96 millimeters, but an actual test shows it is 5 millimeters!
P.S.: PHYSICALOFFSETY works fine!
It depends on the printer and the driver, and possibly on how you load the paper. For example, on a lot of tractor-feed (e.g., dot-matrix) printers, there's a lot of horizontal play, and it's up to the user to load the paper correctly.
The other problem I've seen is that some printer drivers forget to swap the reported horizontal and vertical offsets and resolutions when you switch the page orientation (landscape/portrait) in the middle of a job. But that's pretty easy to detect and correct for.
Software that's supposed to print data in pre-drawn boxes on forms (e.g., invoices, checks, etc.) generally has an interactive alignment process to allow the user to make adjustments to compensate for printer error, paper loading error, etc.
Today's displays cover a huge range of sizes and resolutions. For example, my 34.5 cm × 19.5 cm display (a diagonal of 39.6 cm, or 15.6") has 1366 × 768 pixels, whereas the 15" MacBook Pro (3rd generation) has 2880 × 1800 pixels.
Multiple people have complained that everything is too small on such high-resolution displays (see example). That is easy to explain when developers define their GUIs in pixels. On "traditional" displays this is not a big problem, because pixels are about the same size on most monitors; on newer monitors with much higher pixel density, the pixels are simply smaller.
So how can / should user interface developers deal with that problem? Is it possible to get the physical size of the screen? Is it possible to set physical sizes instead of pixel-based ones? Is that still a problem (it's been a while since I last read about it), or has it been fixed in the meantime?
(While CSS seems to support cm, when I try it here, the rendered size is not the size I set.)
how can / should user interface developers deal with that problem?
Use a toolkit or framework that supports resolution independence. WPF is built from the ground up to be resolution-independent, but even an old framework like Windows Forms can learn new tricks. OS X/iOS and Windows (or the browser, if we're talking about the web) may try to take care of the problem themselves with automatic scaling, but if bitmap graphics are involved, developers may need to provide different bitmaps, as on Android (which faces the most varied resolutions and densities of any OS).
Is it possible to get the physical size of the screen?
No, and developers shouldn't need it. Developers should only care about the class of device (say, a different UI for tablets and smartphones), and perhaps the DPI to decide which bitmap resource to use. Vector resources and fonts should be scaled by the framework.
Is that still a problem (it's been a while since I last read about it) or was that fixed meanwhile?
Depends on when you last read about it. Windows support is still spotty, even for its own built-in apps, and while anyone developing in WPF or UWP has it easy, don't expect major third-party apps to catch up soon. OS X display scaling seems to work a bit better, while modern mobile OSes either run on a limited range of resolutions (iOS and Windows Phone) or handle every resolution imaginable quite nicely (Android).
There are a few ways to deal with different screen sizes. For example, when I make mobile apps in Java, I either use DIP (density-independent pixels, which stay at roughly a fixed physical size) or make objects occupy a percentage of the screen with simple math. For web development you can use vw and vh (viewport width and viewport height): adding these units to a value instead of px makes the object take up a percentage of the viewport, so 100vh is 100% of the viewport height. What I think is the best way, though time-consuming, is to use a library like Bootstrap that automatically resizes elements, even when the window is resized. W3Schools has a good tutorial on Bootstrap, and more detailed explanations of any of these options are an easy Google search away.
Designing a GUI in today's era of display diversity is a real challenge. I would suggest several hints, mainly about desktop GUI application design:
Never set or expect a constant pixel size for text; the user can change it in the OS system settings. Use real-world measures for the text and check its pixel size when drawing (see the sketch after this list). Provide some way to keep arbitrarily sized text within the boundaries of the window.
Never set or expect constant pixel sizes for GUI widgets. Try to position them adaptively, according to the size of the window; most GUI widget toolkits today provide facilities for this.
Never set or expect constant pixel sizes for dialog windows. Let the OS choose the size for you and then use what you get (X), or, if you need to set a size and position yourself (Windows), define it as a percentage of the screen size.
If possible, use scalable image formats for icons; SVG is great for icons, actually. Using sets of bitmap icons in different sizes is acceptable, but it is far from optimal in terms of memory use and still will not scale perfectly in most cases.
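As one concrete example of the first hint, in plain Win32/GDI (my illustration, not part of the answer above; the face name and point size are arbitrary) you can request a font by point size and then check what you actually got in pixels:
HDC hdc = GetDC(hwnd);                               /* hwnd: an existing window */
int heightPx = -MulDiv(10, GetDeviceCaps(hdc, LOGPIXELSY), 72);  /* 10 pt -> pixels */

HFONT hFont = CreateFont(heightPx, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                         DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
                         CLEARTYPE_QUALITY, DEFAULT_PITCH | FF_DONTCARE,
                         TEXT("Segoe UI"));

HFONT hOld = (HFONT)SelectObject(hdc, hFont);
TEXTMETRIC tm;
GetTextMetrics(hdc, &tm);      /* tm.tmHeight is the real pixel height to lay out with */
SelectObject(hdc, hOld);
ReleaseDC(hwnd, hdc);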
My question is about how font handling needs to be changed in order to work correctly under Windows 7. I'm sure that I've made an assumption about something that was valid before, but is no longer valid. But I don't even know where to begin looking! I'm praying someone can help! Here are the details as I understand them (I've also posted this question on a Microsoft Windows Developers forum, but they're not answering):
Yes, I'm behind the times (heck, I still write WIN32 code in plain C!). I have a 10-year-old DLL I wrote that mimics an even older DOS screen I/O library within the client area of a window. Needless to say, it only allows the use of fixed-width fonts. Since some of the programs using the DLL have been moved to Windows 7, a strange flickering appears when a fixed-width TrueType font is used (bitmap fonts still work perfectly). We've tracked the problem down to the fact that a single character written with ExtTextOut is wider than it should be. I've checked the measurements three different ways (by using GetTextExtentPoint32 on a 132-character string and dividing by 132, by calling GetTextMetrics, and even by using GetCharABCWidths for all 256 characters) and they all agree on the font width. But ExtTextOut is rendering the background rectangle one or two pixels wider than the font width. Either that, or it is beginning the background rendering a pixel or two to the left of the position given in the parameters [I call it like this: ExtTextOut( hdc, r.left, r.top, ETO_OPAQUE, &r, &ch, 1, NULL ).] And remember, this EXACT code worked perfectly under Windows 2000, Windows XP and, with bitmap fonts, on Windows 7 -- but it no longer works correctly with fixed-width TrueType fonts under Windows 7.
For anyone who isn't grasping what I need to do: try to imagine writing one character per square on a piece of graph paper. Every square uses the same font, but may have a different foreground and/or background color. I use TA_TOP|TA_LEFT text alignment, because it is the simplest and any consistently applied alignment should work for a fixed-width font.
What I'm seeing is that ExtTextOut is emitting a larger background rectangle than I've specified in the RECT * parameter. Since the rectangle I'm providing is created from the reported size of the font, this should NEVER happen -- and it never happened on Windows XP and earlier, and doesn't happen with bitmap (i.e. .FON) fonts under Windows 7, either. But it ALWAYS happens with fixed-width TrueType fonts under Windows 7. This is with the EXACT SAME EXECUTABLE running on Windows 2000, Windows XP and Windows 7 (32 & 64.) While I would love to simply say Windows 7 has a bug, I'm more inclined to believe that some fundamental assumption that I've made about font handling under Windows is no longer true (after 20 years of writing software for Windows.)
But I have no idea how or where to discover what that might be! Please, PLEASE help me!
--- amendment ---
For anyone interested, I've managed to work around what I am considering a bug -- until I find documentation to the contrary. My workaround consists of two changes to my library:
Use the size returned from GetTextExtentPoint32() of an 'X' instead of data from TEXTMETRICS.
Include the ETO_CLIPPING flag in all ExtTextOut() calls.
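Applied to the call quoted earlier, and assuming rc is the single-character cell rectangle sized from the GetTextExtentPoint32 measurement of 'X', that amounts to:
/* ETO_OPAQUE fills the cell background; ETO_CLIPPING keeps the glyph inside it. */
ExtTextOut( hdc, rc.left, rc.top, ETO_OPAQUE | ETO_CLIPPING, &rc, &ch, 1, NULL );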
Previously, I was using tmHeight+tmExternalLeading for the number of pixels between the tops of consecutive rows of text, as is documented. I discovered that the size.cy value coming back from the GetTextExtentPoint32() wasn't the same and seemed more accurate. The worst example I found was the OCRB true type font. Here's what I saw in the debugger for the OCRB font I'd created (using the system font selection dialog):
ocrbtm.tmHeight = 11
ocrbtm.tmExternalLeading = 7
ocrbsize.cy = 11
So, for some reason that I've yet to discover, Windows is ignoring the external leading value defined for the OCRB font. Using the size value instead of the TM results in nice, neat, close packed text, which is just what I wanted.
The ETO_CLIPPING flag should not be necessary for me because I am setting the rectangle to exactly the dimensions of a single character and using ETO_OPAQUE to fill in the background (and overwrite the previous cell content.) But without the clipping flag, a single character is wider than either the size, text metric, or ABC width would indicate -- at least, that is true based on all of the documentation I've found so far.
I believe the HEIGHT issue has existed for a long time, but the rest was unnecessary until we ran our software under Windows 7. I'm appending this to my question to see if anyone can explain what I obviously don't understand.
-- amendment 2 --
1: All documentation I can find says that tmHeight+tmExternalLeading should produce single spaced lines of text. Period. But that is not always true and I cannot find documentation indicating how Windows determines the different values that are sometimes returned by GetTextExtentPoint32().
2: under Win7 (maybe Vista) ExtTextOut started filling in a little more background than it should (by adding a couple extra pixels to the right), but only when a true type font is selected. It does this even if the rectangle is double the expected size of the character (in BOTH dimensions.) DPI/Scaling might be a factor, but since my system is set to 100%, it would seem that Windows is having trouble with a 1:1 scaling factor and that would seem to be a bug. The fact that it only affects true type and not bitmap (.FON) fonts also seems to rule out scaling (unless there is a bug in the scaling system), since Windows should attempt to scale all of the text, not just some of it. Also, there's a greyed (but checked) setting "Use Windows XP style DPI scaling" in the "Custom DPI Setting" dialog. Lastly, this entire issue may be a result of my running under the Windows Classic theme instead of one of the Aero or other Win7 native themes.
-- amendment 3 --
Simply calling SetProcessDPIAware() has no effect on the issue I'm having. Since my problem exists at the 100% DPI setting (scale 1:1), if my problem is DPI-related, then I must have discovered a bug in the DPI virtualization because this is how Microsoft describes the feature:
This feature works by providing "virtualized" system metrics and UI elements to the application, as if it were running at 96 DPI. The application then renders to a 96-DPI off-screen surface, and the Desktop Window Manager scales the resulting application window to match the DPI setting.
All of my settings show that I'm at 100% scaling, and looking in the custom settings box clearly shows that means 96 DPI. So, if the DPI virtualization from 96 DPI to 96 DPI is not working for my fixed-width true type fonts, then Windows has a problem, right? Or is there some function I need to call (or stop calling?) in order allow the DPI virtualizer to work correctly?
I'm still not convinced that the supposed scaling issue actually has as much to do with the font SIZE as I originally thought. That's because the problem is manifesting in the background rectangle being filled by ExtTextOut() instead of the text character being emitted. The background rectangle gets enlarged a bit when the font is true type. I've also now verified that this problem occurs whether using the Windows Classic theme or the standard Windows Aero theme. Now to build a simplified example so others can experiment with it.
-- amendment 4 --
I've created a minimal demo program that shows what I'm seeing (and what I'm doing.) The Visual Studio 2010 project/source may be downloaded from http://www.svalli.com/files/fwtt.7z -- I intentionally didn't include executables because I don't want to risk spreading malware. The program has you choose a fixed-width font and then writes two 5x5 character grids to the client area, one created using the GetTextExtentPoint32 size and one using the TEXTMETRIC size as documented by Microsoft. The grids are in a black&white checkerboard pattern with a yellow on red character written last into the center to show the overlap effect (you may need a zoom utility to see it clearly.) The program also draws a string that starts with 5 X's just below the grid, starting at the same left offset, to be used as a comparison for my method of placing individual characters (I match the string.) The menu allows toggling clipping on/off in ExtTextOut and selection of other fonts. There is also a command line option dpiaware (case-sensitive) that causes the program to call SetProcessDPIAware() when it starts up, so that the effect of that call may also be evaluated.
From creating this I've learned that ExtTextOut is filling the correct background rectangle, but the character being rendered with an opaque background may be wider than it should be and may not even begin where ExtTextOut was told to begin drawing! I said "should be" because the character spacing I'm ending up with matches what I get when I have ExtTextOut render a whole string. The overlap may apparently be on either or both sides of the given rectangle, for example, OCRB adds an extra pixel to both the left and right sides of the character cell while the other true type fonts I've checked add two pixels to the right edge.
I really want to do this the "right" way, but I cannot find any documentation that shows what I'm doing wrong or am missing. Well, I am probably missing something for DPI Aware at scales other than 100%, but otherwise, I'm just baffled.
-- amendment 5 --
Slightly less baffled... the problem is caused by ClearType. Turning off ClearType made all of the fonts work again. Turning ON ClearType under XP causes the same problem. Apparently ClearType can silently (until someone tells me how to detect it) stretch characters horizontally by a couple pixels in order to make space for the shaded pixels it adds to smooth things out.
Is clipping the only way around this problem?
-- amendment 6 --
Partial answer to my clipping question above: when creating a new font I now do the following (in pseudocode):
CreateFontIndirect
SelectFont
GetTextMetrics
if( (tmPitchAndFamily & TMPF_TRUETYPE) && Win6.x or above )
    if( SystemParametersInfo( SPI_GETCLEARTYPE ) )
        lfQuality = NONANTIALIASED_QUALITY
        DeleteObject( font )
        CreateFontIndirect
Without enabling clipping this almost always works with the font sizes I'm using, though I've found a few that still render an extra pixel to the right (or left) of the character cell. Luckily, those appear to be free fonts found on the internet, so their overall quality might be below the standards of professional font foundries.
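For reference, here is a minimal C sketch of the pseudocode above using real Win32 calls. Note that SPI_GETCLEARTYPE is not an actual SPI constant; SPI_GETFONTSMOOTHING / SPI_GETFONTSMOOTHINGTYPE are my stand-ins for it, and IsWindowsVistaOrGreater() (from versionhelpers.h) stands in for the "Win6.x or above" check:
#include <windows.h>
#include <versionhelpers.h>

/* Create the grid font; if it is TrueType, we are on Vista or later, and
 * ClearType smoothing is active, recreate it with antialiasing disabled. */
HFONT CreateGridFont(HDC hdc, LOGFONT lf)
{
    HFONT hFont = CreateFontIndirect(&lf);
    HFONT hOld  = (HFONT)SelectObject(hdc, hFont);

    TEXTMETRIC tm;
    GetTextMetrics(hdc, &tm);

    BOOL smoothing = FALSE;
    UINT smoothingType = 0;
    SystemParametersInfo(SPI_GETFONTSMOOTHING, 0, &smoothing, 0);
    SystemParametersInfo(SPI_GETFONTSMOOTHINGTYPE, 0, &smoothingType, 0);

    if ((tm.tmPitchAndFamily & TMPF_TRUETYPE) &&
        IsWindowsVistaOrGreater() &&
        smoothing && smoothingType == FE_FONTSMOOTHINGCLEARTYPE)
    {
        /* Recreate the font without antialiasing so glyphs stay in their cells. */
        SelectObject(hdc, hOld);
        DeleteObject(hFont);
        lf.lfQuality = NONANTIALIASED_QUALITY;
        hFont = CreateFontIndirect(&lf);
        SelectObject(hdc, hFont);
    }
    return hFont;
}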
If anyone can find a better answer, I'd really, REALLY love to hear it! Until then, I think this is as good as it will get. Thanks for reading this far!
Make sure your code is high DPI aware, and then tell the OS that your process is DPI aware.
If you don't tell the OS that you're DPI aware, some of the measurement functions will lie and give you numbers based on the assumption that the display DPI is actually 96 dpi regardless of what it really is. Meanwhile, the drawing functions will try to scale in the other direction. For simple high-level drawing, this approach generally works (though it often leads to fuzzy text). For small measurements and precise placement of individual characters, this often results in round off problems that lead to things like inconsistent font sizes. This behavior was introduced in Windows Vista.
You can see it all the time in Visual Studio 2010+ as the syntax highlighter colors the text and words shift by a couple pixels here and there as you type. Really frickin' annoying.
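A minimal sketch of opting in from code (assuming a plain Win32 entry point; a <dpiAware>true</dpiAware> manifest entry is the more commonly recommended route):
#include <windows.h>

int WINAPI wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                    PWSTR pCmdLine, int nCmdShow)
{
    /* Must be called before any window, DC, or font work so the
     * measurement functions report real values instead of 96-dpi ones. */
    SetProcessDPIAware();

    /* ... register the window class, create windows, run the message loop ... */
    return 0;
}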
Regarding the amendment:
tmExternalLeading is simply a recommendation from the font designer as to how much extra space to put between lines of text. MSDN documentation typically says, "the amount of extra leading (space) that the application adds between rows." Well, you're the application, so the "Right Thing To Do" is to add it between rows when you're drawing text yourself, but it really is up to you. (I suspect higher-level functions like DrawText will use it.)
It is perfectly correct for GetTextExtentPoint32 (and friends) to return a size.cy equal to tmHeight and to ignore tmExternalLeading. As the programmer, it's ultimately your choice how much leading to actually use.
You can see this with some simple drawing code. Select a font with a non-zero tmExternalLeading (Arial works for me). Draw some text using TextOut and a unique background color. Then measure the text with GetTextExtentPoint32 and draw some lines based on the values you get back. You'll see that the background color rectangle excludes the external leading. External leading is just that: external. It's not in the bounds of the character cell.
// Draw the sample text with an opaque background.
assert(::GetMapMode(ps.hdc) == MM_TEXT);
assert(::GetBkMode(ps.hdc) == OPAQUE);
assert(::GetTextAlign(ps.hdc) == TA_TOP);
COLORREF rgbOld = ::SetBkColor(ps.hdc, RGB(0xC0, 0xFF, 0xC0));
::TextOutW(ps.hdc, x, y, pszText, cchText);
::SetBkColor(ps.hdc, rgbOld);
// This vertical line at the right side of the text shows that opaque
// background is exactly the height returned by GetTextExtentPoint32.
SIZE size = {0};
if (::GetTextExtentPoint32W(ps.hdc, pszText, cchText, &size)) {
    ::MoveToEx(ps.hdc, x + size.cx, y, NULL);
    ::LineTo(ps.hdc, x + size.cx, y + size.cy);
}
// These horizontal lines show the normal line spacing, taking into
// account tmExternalLeading.
assert(tm.tmExternalLeading > 0); // ensure it's an interesting case
::MoveToEx(ps.hdc, x, y, NULL);
::LineTo(ps.hdc, x + size.cx, y); // top of this line
const int yNext = y + tm.tmHeight + tm.tmExternalLeading;
::MoveToEx(ps.hdc, x, yNext, NULL);
::LineTo(ps.hdc, x + size.cx, yNext); // top of next line
The gap between the bottom of the colored rectangle and the top of the next line represents the external leading, which is always outside the character cell.
OCR-B is designed for reliable optical character recognition in banking equipment. Having a large external leading (relative to the height of the actual text) may be appropriate for some OCR applications. For this particular font, it's probably not an aesthetic choice.
I'm making a program that will have a widget that has to be fixed in size. Is there an industry standard for the smallest resolution width?
What are some common way of dealing with this problem?
On traditional PCs (i.e. no mobile, no "custom", no specialized hardware), you usually will not find a display with a resolution below 640x480x256, so that is the "technical" de facto standard.
However, if you try to design for that resolution, your controls will look ugly and uneconomically designed, wasting lots of available space on real-world platforms.
I'd say 800x600x16 is an absolute minimum requirement. Even Windows safe mode can usually come up in (or be switched to) 800x600. So I usually design resizable apps for 800x600, and if done right, they look and behave great even at the largest resolutions. In contrast, if you design a resizable app for 640x480, you will make an awful lot of layout compromises due to the limited space, and all that for a resolution that "nobody" uses in the real world.
Furthermore, I love applications that resize intelligently. Depending on your GUI framework/toolkit, that is a requirement that can be met easily, or not so easily. It's worth the hassle, though.
You might also consider the font scaling setting. On high-resolution displays, many users prefer the "large fonts" setting, or something else other than the default font scaling. Your app must then scale accordingly; the minimum-resolution criterion becomes less important, while the app's ability to resize intelligently gains much more significance.
In short:
a) Design for 800x600x16
a.1) Let your app terminate with an error message if the resolution is smaller than that
b) Make sure all resizeable dialogs resize intelligently
c) Test all layouts on large and small font scaling settings as well
d) Saying "800x600" by itself is not quite enough, since your app usually cannot use the whole screen, even when maximized. (We are not talking about fullscreen apps, are we?) So account for the taskbar and any other fixed screen elements that a normal window cannot use, and for the window's title bar when maximized. You will want the window to fit into the desktop work area in all cases. (Well, maybe you will.) Windows can tell you the dimensions of that area, taking into account the taskbar and whatever else the user happens to use, so you could alert/abort if the usable space is smaller than the minimum resolution you designed for, as in the sketch below.
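As a rough Win32 illustration of point d) (my own sketch, not part of the original answer), you can query the work area and compare it against the minimum you designed for:
RECT work;                                          /* desktop minus taskbar etc. */
SystemParametersInfo(SPI_GETWORKAREA, 0, &work, 0);

int usableW = work.right - work.left;
int usableH = work.bottom - work.top;

if (usableW < 800 || usableH < 600)
{
    MessageBox(NULL,
               TEXT("This program needs a desktop work area of at least 800x600."),
               TEXT("Display too small"),
               MB_OK | MB_ICONWARNING);
    /* terminate, or fall back to a scrollable layout */
}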
For PCs (excluding embedded stuff like mobile phones, wristwatches, MP3 players, washing machines, etc.), the smallest resolution is 640x480, otherwise known as VGA resolution.
There may be some PC-class computers, like early Macs, Ataris, or TRS-80s, with smaller resolutions, but nobody uses them nowadays. Conventional wisdom says the smallest monitor width is 640 pixels.
In the last 10 years a lot of developers have raised the assumed minimum resolution to 1024x768, otherwise known as XGA (by the way, nobody has called these modes VGA or XGA since the mid-1990s). All graphics cards manufactured since 1999 can handle a width of at least 1024 pixels.
768 pixels was likewise assumed to be the minimum height by a lot of developers until about three years ago, when Asus created the netbook category. Most netbooks have a resolution of 1024x600, so a lot of software doesn't fit on netbook screens (much to the annoyance of netbook owners).
Currently (since I'm one of those netbook owners) my own standard minimum is 1024x600, that is, 1024 pixels wide by 600 pixels high (actually more like 560 pixels, because I usually have to account for the menu bar and the taskbar).
Note: wikipedia has a nice summary of standard monitor resolutions: http://en.wikipedia.org/wiki/Graphic_display_resolutions
How do people make GUIs? I mean the basic building blocks or principles used to draw visual components on the screen, as in KDE, GNOME, etc. Are there any simple examples of how to draw something like a rectangle on the screen by dealing directly with the hardware?
I am using a PC for those who are asking about my platform.
Well okay, let's start at the bottom. You have a monitor that displays an image. This image is a matrix of pixels, say, 1600x1200 pixels with 24 bits depth.
The monitor knows what to display from the video adapter. The video adapter knows what to display through the "frame buffer", which is a big block of memory that - in this example - contains 1600 * 1200 pixels, usually with 32 bits per pixel in contemporary cards.
The frame buffer is often accessible to the CPU as a big block of memory that it can poke into directly, and some adapters have GPUs that can render things like shaded, textured triangles into the frame buffer themselves, so the CPU just sends commands through a "command buffer", telling the GPU what to draw and where.
Then you have the operating system, which loads a hardware driver that communicates with the video adapter.
The operating system usually offers functions to draw into the frame buffer. Win32, for example, has lots of functions like BitBlt, LineTo, TextOut, etc. These end up talking to the driver.
Then you have something like Java, that renders its own graphics, typically using functions provided by the operating system.
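To make the "big block of memory" idea concrete, here is a hedged C sketch; the resolution, the 32-bit pixel format, and the existence of a linear framebuffer pointer are all assumptions for illustration:
#include <stdint.h>

#define FB_WIDTH  1600
#define FB_HEIGHT 1200

/* Assume the OS/driver has mapped the linear frame buffer for us:
 * 32 bits per pixel, rows stored one after another. */
static uint32_t *framebuffer;   /* FB_WIDTH * FB_HEIGHT pixels */

static void put_pixel(int x, int y, uint32_t argb)
{
    if (x >= 0 && x < FB_WIDTH && y >= 0 && y < FB_HEIGHT)
        framebuffer[y * FB_WIDTH + x] = argb;   /* one pixel = one 32-bit word */
}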
The simple answer is bitmaps; in fact, this also applies to fonts on terminals in the early days.
The original GUIs, things like Xerox PARC's Alto GUI, were based on bitmap displays, and the graphics were drawn with simple bitmap drawing tools and graphics libraries, using simple geometry to determine shapes like circles, squares, and rectangles, and then mapping them to display pixels.
Today's GUIs are the same, except with additional software and hardware that have sped up and improved the process and the performance of these GUIs.
The fundamental mapping of bits, e.g. 10101010, to pixels depends on the display hardware, but at a simplistic level you provide a display buffer in memory and simply populate its bytes with the display data.
So for a basic monochrome bitmap, you'd draw by providing bits that represent the shape you want: for example, a simple 8x8-pixel button laid out like this.
01111110
10000001
10000001
10111101
10111101
10000001
10000001
01111110
You can see it more easily if I render it with # and SPACE instead of 1 and 0:
 ###### 
#      #
#      #
# #### #
# #### #
#      #
#      #
 ###### 
As a bitmap image it would look like this: http://i.stack.imgur.com/i7lVQ.png (I know it's a bit small :) but this is the sort of scale we would have begun at when GUIs were first developed).
If you had a more complex display (e.g. 24-bit color), you'd provide each pixel as a 24-bit number.
Obviously some bitmaps cannot be drawn by hand like we've done above (for example, the border of a window). This is where geometry comes in handy: we can use simple functions to determine the pixel values required to draw a rectangle, or any other simple shape, and then build up from there.
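For instance, a rectangle border reduces to four runs of pixels. A hedged C sketch, reusing the hypothetical put_pixel() helper from the framebuffer sketch in the previous answer, might look like this:
/* Draw a 1-pixel-wide rectangle outline by setting just the border pixels. */
static void draw_rect_outline(int left, int top, int width, int height,
                              uint32_t color)
{
    for (int x = left; x < left + width; x++) {
        put_pixel(x, top, color);               /* top edge    */
        put_pixel(x, top + height - 1, color);  /* bottom edge */
    }
    for (int y = top; y < top + height; y++) {
        put_pixel(left, y, color);              /* left edge   */
        put_pixel(left + width - 1, y, color);  /* right edge  */
    }
}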
Once you are able to draw graphics in this way on a display, you then hook a drawing loop onto a system interrupt to keep the display up to date (you redraw the display very often, depending on your system performance.) This way you can make it handle interaction from user devices, e.g. a mouse.
Back in the early days, even before the Xerox PARC Alto, there were a number of computer systems with vector-based displays; these built up an image by drawing lines on a CRT representation of a Cartesian plane. However, these displays never saw mainstream use, except perhaps in some early video games like Asteroids and Tempest.
You probably need a graphics library such as, for example, OpenGL.
For direct hardware interaction, you probably need to do something like assembly, which is completely computer specific.
If you are willing to look through a lot of source code, you might look at Mesa 3D, an open source implementation of the OpenGL specification.