How to read GetDeviceCaps() return values? - windows

I am new to Windows API and I just can't seem to figure this out:
According to the documentation, the function int GetDeviceCaps(HDC hdc, int index); returns an integer value corresponding to the item I want to know about. However, how am I supposed to map that integer back to the documented values?
printf("Rastercaps: %d\n", GetDeviceCaps(hdc, RASTERCAPS));
// rastercaps: 32409
From the documentation's table for the RASTERCAPS item, the listed values are:
RC_BANDING      Requires banding support.
RC_BITBLT       Capable of transferring bitmaps.
RC_BITMAP64     Capable of supporting bitmaps larger than 64 KB.
RC_DI_BITMAP    Capable of supporting the SetDIBits and GetDIBits functions.
RC_DIBTODEV     Capable of supporting the SetDIBitsToDevice function.
RC_FLOODFILL    Capable of performing flood fills.
...
Does 32409 mean the device has the RASTERCAPS values (capabilities) 3, 2, 4, 0, and 9, in the order stated in the table?
Thank you.

They're bitmasks. In the relevant C header file (wingdi.h) there is
/* Raster Capabilities */
#define RC_NONE
#define RC_BITBLT 1 /* Can do standard BLT. */
#define RC_BANDING 2 /* Device requires banding support */
#define RC_SCALING 4 /* Device requires scaling support */
#define RC_BITMAP64 8 /* Device can support >64K bitmap */
...and many more.
The return value (32409) is made up of the bitwise OR of these values. So, for example, if you wanted to know whether the device can support >64K bitmaps, you would do:
int rc = GetDeviceCaps(hdc, RASTERCAPS);
if (rc & RC_BITMAP64) { /* it does support >64k */ }
So in this case, 32409 is 0111111010011001 in binary which means it has the capabilities RC_BITBLT | RC_BITMAP64 | RC_GDI20_OUTPUT | RC_DI_BITMAP |
RC_DIBTODEV | RC_BIGFONT | RC_STRETCHBLT | RC_FLOODFILL | RC_STRETCHDIB | RC_OP_DX_OUTPUT.
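For illustration, here is a small sketch that tests each documented flag in turn and prints the ones that are set (the flag values are the RC_* constants from wingdi.h; print_raster_caps is just a name invented for this example):
#include <windows.h>
#include <stdio.h>

static const struct { DWORD flag; const char *name; } rc_flags[] = {
    { RC_BITBLT,     "RC_BITBLT"     },
    { RC_BANDING,    "RC_BANDING"    },
    { RC_SCALING,    "RC_SCALING"    },
    { RC_BITMAP64,   "RC_BITMAP64"   },
    { RC_DI_BITMAP,  "RC_DI_BITMAP"  },
    { RC_DIBTODEV,   "RC_DIBTODEV"   },
    { RC_STRETCHBLT, "RC_STRETCHBLT" },
    { RC_FLOODFILL,  "RC_FLOODFILL"  },
    { RC_STRETCHDIB, "RC_STRETCHDIB" },
    /* ...add the remaining RC_* flags as needed */
};

void print_raster_caps(HDC hdc)
{
    int rc = GetDeviceCaps(hdc, RASTERCAPS);
    printf("RASTERCAPS = %d (0x%04X)\n", rc, (unsigned)rc);
    for (size_t i = 0; i < sizeof(rc_flags) / sizeof(rc_flags[0]); ++i)
        if (rc & rc_flags[i].flag)              /* test each bit of the mask */
            printf("  %s\n", rc_flags[i].name);
}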
See "Bitwise operations in C"

Related

Explain fst_posmode, difference between F_PEOFPOSMODE and F_VOLPOSMODE?

Header file:
/* Position Modes (fst_posmode) for F_PREALLOCATE */
#define F_PEOFPOSMODE 3 /* Make it past all of the SEEK pos modes so that */
/* we can keep them in sync should we desire */
#define F_VOLPOSMODE 4 /* specify volume starting postion */
Man page:
The position modes (fst_posmode) for the F_PREALLOCATE command indicate how to use the offset field.
The modes are as follows:
F_PEOFPOSMODE Allocate from the physical end of file.
F_VOLPOSMODE Allocate from the volume offset.
The doc is hard to understand.
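For what it's worth, a minimal sketch of how the two modes are typically used on macOS: both fill an fstore_t and pass it to fcntl with F_PREALLOCATE, and they differ only in what fst_offset is measured from (the fd and the 1 MiB length below are placeholders):
#include <fcntl.h>

int preallocate_example(int fd)
{
    fstore_t store = {0};
    store.fst_flags   = F_ALLOCATEALL;   /* allocate everything requested or fail */
    store.fst_posmode = F_PEOFPOSMODE;   /* fst_offset is relative to the physical end of file */
    store.fst_offset  = 0;
    store.fst_length  = 1024 * 1024;     /* grow the allocation by 1 MiB past physical EOF */

    if (fcntl(fd, F_PREALLOCATE, &store) == -1)
        return -1;
    /* store.fst_bytesalloc now reports how many bytes were actually allocated;
       with F_VOLPOSMODE, fst_offset would instead be an absolute volume offset. */
    return 0;
}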

Given an HBITMAP, is there any way to find out, whether it contains an alpha channel?

In an attempt to improve this answer, I am looking for a way to determine whether a bitmap referenced through an HBITMAP contains an alpha channel.
I understand that I can call GetObject and retrieve a BITMAP structure:
BITMAP bm = { 0 };
::GetObject(hbitmap, sizeof(bm), &bm);
But that only gets me the number of bits required to store the color of a pixel. It doesn't tell me which bits are actually used, or how they relate to the individual channels. For example, a 16bpp bitmap could encode a 5-6-5 BGR image, or a 1-5-5-5 ABGR image. Likewise, a 32bpp bitmap could store ABGR or xBGR data.
I could take it a step further and probe for a DIBSECTION instead (if available):
bool is_dib = false;
BITMAP bm = { 0 };
DIBSECTION ds = { 0 };
if ( sizeof(ds) == ::GetObject(hbitmap, sizeof(ds), &ds) ) {
    is_dib = true;
} else {
    ::GetObject(hbitmap, sizeof(bm), &bm);
}
While this can disambiguate the 16bpp case (using the dsBitfields member), it still fails to determine the existence of an alpha channel in the case of a 32bpp image.
Is there any way to find out whether a bitmap referenced through an HBITMAP contains an alpha channel (and which bit(s) are used for it), or is this information simply not available?
You cannot know definitively, but you can make a good educated guess if you're willing to iterate through the pixels.
(Ignoring the 16-bit color with 1-bit alpha channels, for now anyway.)
For there to be an alpha channel, it's necessary (but not sufficient) for the bitmap to be a DIB section and to have 32 bits per pixel. As noted in the question, you can check for those requirements.
We also know that Windows deals only with pre-multiplied alpha. That means that, for every pixel, A >= max(R, G, B). So, if you're willing to scan all the pixels, you can rule out many 32bpp images that really carry only xRGB (24-bit) data. If that condition holds for all pixels and any of the A values are non-zero, you almost certainly have an alpha channel (or a corrupted image); a sketch of this scan follows below.
Basically, the only uncertainty left is the all-transparent image versus the all-black image, both of which consist of pixels with all channels set to zero. Perhaps it's sufficient to take an educated guess in that case. I would guess yes, since having a 32-bit DIB section is a pretty strong signal. If you had a 32-bit device-dependent bitmap, then it doesn't have alpha, and if you had a device-independent bitmap without alpha, you'd probably use 24 bits per pixel to save space.
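A minimal sketch of that pixel scan, assuming the handle has already been confirmed (as described in the question) to be a 32bpp DIB section whose bits are reachable through bmBits; the function name is invented for this example:
#include <windows.h>

// Heuristic only: true if every pixel satisfies A >= max(R, G, B)
// (premultiplied alpha) and at least one alpha byte is non-zero.
bool LooksLikePremultipliedAlpha(const DIBSECTION& ds)
{
    const BYTE* bits  = static_cast<const BYTE*>(ds.dsBm.bmBits);
    const LONG width  = ds.dsBm.bmWidth;
    const LONG height = ds.dsBm.bmHeight < 0 ? -ds.dsBm.bmHeight : ds.dsBm.bmHeight;
    const LONG stride = ds.dsBm.bmWidthBytes;

    bool anyAlpha = false;
    for (LONG y = 0; y < height; ++y)
    {
        const BYTE* px = bits + y * stride;
        for (LONG x = 0; x < width; ++x, px += 4)
        {
            const BYTE b = px[0], g = px[1], r = px[2], a = px[3]; // BGRA byte order
            if (a < r || a < g || a < b)
                return false;          // violates premultiplied alpha: treat as no alpha
            if (a != 0)
                anyAlpha = true;
        }
    }
    return anyAlpha;                   // an all-zero image remains ambiguous
}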
Some of the more detailed bitmap info headers can tell you if bits are reserved for the alpha channel. For example, see BITMAPV5HEADER, which has a mask indicating which bits are the alpha channel (though the documentation says some contradictory things). Likewise for the BITMAPV4HEADER. Unfortunately, I don't think there's a way to get this version of the header from an HBITMAP. (And I'm certain there are alpha-enabled bitmap files out there that don't set these fields correctly.)
It's well-known that GDI doesn't handle alpha channels (with the exception of AlphaBlend, which doesn't take an HBITMAP but accesses one selected into a memory DC). There are user APIs, like UpdateLayeredWindow, that work with images with alpha channels but, like AlphaBlend, take the bitmap data from the information selected into a memory DC. LoadImage, if passed the correct flags, will preserve the alpha channel when loading a DIB to be accessed by HBITMAP.
ImageList_Add, which does take an HBITMAP, will preserve an alpha channel if the image list is created with the proper flags. In all these cases, however, the caller must know that the bitmap data contains proper alpha data and set the right flags for the API. This suggests that the information is not readily available from the bitmap handle.
If you have access to the bitmap resource or file that the image was loaded from, it is possible to see whether the original header uses BI_BITFIELDS and has an alpha channel specified, but you can't get to that header from the HBITMAP in all cases. (And there remains the concern that the header isn't filled out correctly.)
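Regarding the LoadImage route mentioned above, the flag usually meant is LR_CREATEDIBSECTION, which makes LoadImage hand back a DIB section with the file's bits copied through unchanged (the file name below is just a placeholder):
HBITMAP hbmp = static_cast<HBITMAP>(
    ::LoadImageW(nullptr, L"image.bmp", IMAGE_BITMAP,
                 0, 0, LR_LOADFROMFILE | LR_CREATEDIBSECTION));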
You can get it indirectly by using GetObject function. Tucked away in the documentation is a note:
If hgdiobj is a handle to a bitmap created by calling CreateDIBSection, and the specified buffer is large enough, the GetObject function returns a DIBSECTION structure. In addition, the bmBits member of the BITMAP structure contained within the DIBSECTION will contain a pointer to the bitmap's bit values.
If hgdiobj is a handle to a bitmap created by any other means, GetObject returns only the width, height, and color format information of the bitmap. You can obtain the bitmap's bit values by calling the GetDIBits or GetBitmapBits function.
(emphasis mine)
In other words: if you try to decode it as a DIBSECTION, but it's only a BITMAP, then the BITMAPINFOHEADER part of the DIBSECTION (the dsBmih member) will not be updated (i.e. it is left as zeros).
It's helpful to remember how a BITMAP and a DIBSECTION differ:
| BITMAP                  | DIBSECTION                 |
|-------------------------|----------------------------|
| bmType: Longint         | bmType: Longint            |
| bmWidth: Longint        | bmWidth: Longint           |
| bmHeight: Longint       | bmHeight: Longint          |
| bmWidthBytes: Longint   | bmWidthBytes: Longint      |
| bmPlanes: Word          | bmPlanes: Word             |
| bmBitsPixel: Word       | bmBitsPixel: Word          |
| bmBits: Pointer         | bmBits: Pointer            |
|                         |                            |
|                         | BITMAPINFOHEADER:          | <-- will remain unchanged for BITMAPs
|                         |   biSize: DWORD            |
|                         |   biWidth: Longint         |
|                         |   biHeight: Longint        |
|                         |   biPlanes: Word           |
|                         |   biBitCount: Word         |
|                         |   biCompression: DWORD     |
|                         |   biSizeImage: DWORD       |
|                         |   biXPelsPerMeter: Longint |
|                         |   biYPelsPerMeter: Longint |
|                         |   biClrUsed: DWORD         |
|                         |   biClrImportant: DWORD    |
|                         |                            |
|                         | dsBitfields: DWORD[3]      |
|                         | dshSection: HANDLE         |
|                         | dsOffset: DWORD            |
If you try to get a BITMAP as a DIBSECTION, the GetObject function will not fill in any fields of the BITMAPINFOHEADER.
So check whether all those values are zero; if they are, you know it's not a DIBSECTION (and must be a plain BITMAP).
Sample
#include <windows.h>
#include <stdexcept>

bool IsDibSection(HBITMAP bmp)
{
    // Try to decode the handle as a DIBSECTION; start with everything zeroed.
    DIBSECTION ds = { 0 };

    int res = ::GetObject(bmp, sizeof(ds), &ds);
    if (res == 0)
        throw std::runtime_error("GetObject failed");

    // If the bitmap actually was a BITMAP (and not a DIBSECTION),
    // the BITMAPINFOHEADER values remain zeros.
    if (ds.dsBmih.biSize == 0
            && ds.dsBmih.biWidth == 0
            && ds.dsBmih.biHeight == 0
            && ds.dsBmih.biPlanes == 0
            && ds.dsBmih.biBitCount == 0
            && ds.dsBmih.biCompression == 0
            && ds.dsBmih.biSizeImage == 0
            && ds.dsBmih.biXPelsPerMeter == 0
            && ds.dsBmih.biYPelsPerMeter == 0
            && ds.dsBmih.biClrUsed == 0
            && ds.dsBmih.biClrImportant == 0)
    {
        return false; // it's not a DIBSECTION
    }

    return true; // assume it is a DIBSECTION
}
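A possible usage, gating any further alpha-channel inspection on the result (hbitmap stands for the handle in question):
DIBSECTION ds = { 0 };
if (IsDibSection(hbitmap)
        && ::GetObject(hbitmap, sizeof(ds), &ds) == sizeof(ds)
        && ds.dsBmih.biBitCount == 32)
{
    // Only a 32bpp DIB section can plausibly carry an alpha channel;
    // ds.dsBitfields and/or the pixel data can be inspected from here.
}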

How can I insert a single byte to be sent prior to an I2C data package?

I am developing an application in Atmel Studio 6 using the xMega32a4u. I'm using the TWI libraries provided by Atmel. Everything is going well for the most part.
Here is my issue: In order to update an OLED display I am using (SSD1306 controller, 128x32), the entire contents of the display RAM must be written immediately following the I2C START command, slave address, and control byte so the display knows to enter the data into the display RAM. If the control byte does not immediately precede the display RAM package, nothing works.
I am using a Saleae logic analyzer to verify that the bus is doing what it should.
Here is the function I am using to write the display:
void OLED_buffer(){ // Used to write contents of display buffer to OLED
    uint8_t data_array[513];
    data_array[0] = SSD1306_DATA_BYTE;
    for (int i = 0; i < 512; ++i){
        data_array[i+1] = buffer[i];
    }
    OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
    OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
    OLED_command(SSD1306_SETSTARTLINE | 0x00);
    twi_package_t buffer_send = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = data_array,
        .length = 513
    };
    twi_master_write(&TWIC, &buffer_send);
}
Clearly, this is very inefficient, as each call to this function copies the entire array "buffer" into a new array "data_array", one element at a time. The point of this is to insert the control byte (SSD1306_DATA_BYTE = 0x40) into the array so that the entire "package" is sent at once, with the control byte in the right place. I could make the original "buffer" array one element larger and add the control byte as its first element to skip this copying, but that makes the size 513 rather than 512 and might mess with some of the text/graphics functions that manipulate this array and depend on it being the expected size.
Now, I thought I could write the code like this:
void OLED_buffer(){ // Used to write contents of display buffer to OLED
    uint8_t data_byte = SSD1306_DATA_BYTE;
    OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
    OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
    OLED_command(SSD1306_SETSTARTLINE | 0x00);
    twi_package_t data_control_byte = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = &data_byte,
        .length = 1
    };
    twi_master_write(&TWIC, &data_control_byte);
    twi_package_t buffer_send = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = buffer,
        .length = 512
    };
    twi_master_write(&TWIC, &buffer_send);
}
That doesn't work. The first "twi_master_write" call sends a START, address, control byte, STOP. Then the next call sends a START, address, data buffer, STOP. Because the control byte is missing from the latter transaction, this does not work. All I need is to insert a 0x40 byte between the address byte and the buffer array when it is sent over the I2C bus. twi_master_write is a function provided in the Atmel TWI libraries. I've tried to examine the library source to figure out its inner workings, but I can't make sense of it.
Surely, instead of figuring out how to recreate a twi_write function to work the way I need, there is an easier way to add this preceding control byte? Ideally one that is not so wasteful of clock cycles as my first code example? Realistically the display still updates very fast, more than enough for my needs, but that does not change the fact this is inefficient code.
I appreciate any advice you all may have. Thanks in advance!
How about having buffer and data_array point to the same uint8_t[513] array, with buffer starting at its second element? Then you can continue to use buffer as you do today, but also use data_array directly without first having to copy all the elements from buffer.
uint8_t data_array[513];
uint8_t *buffer = &data_array[1];
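Putting it together, OLED_buffer() could then send the control byte and the 512-byte frame in a single transaction with no copying (a sketch reusing the names and constants from the question, and the data_array/buffer aliasing shown above):
void OLED_buffer(void){ // write the display buffer to the OLED in one I2C transaction
    data_array[0] = SSD1306_DATA_BYTE;          /* control byte already sits in front of the data */
    OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
    OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
    OLED_command(SSD1306_SETSTARTLINE | 0x00);
    twi_package_t buffer_send = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = data_array,                   /* control byte + 512 data bytes */
        .length = 513
    };
    twi_master_write(&TWIC, &buffer_send);
}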

How does gcc get the alignment for each type on a specific platform?

Is it hard-coded into gcc's source, or fetched somehow programmatically?
I think it is hard-coded in the arch-specific folder, e.g. for sparc:
http://www.google.com/codesearch#Yj7Hz1ZInUg/trunk/gcc-4.2.1/gcc/config/sparc/sparc.h
/* No data type wants to be aligned rounder than this. */
#define BIGGEST_ALIGNMENT (TARGET_ARCH64 ? 128 : 64)
/* The best alignment to use in cases where we have a choice. */
#define FASTEST_ALIGNMENT 64
...
/* Make strings word-aligned so strcpy from constants will be faster. */
#define CONSTANT_ALIGNMENT(EXP, ALIGN) \
((TREE_CODE (EXP) == STRING_CST \
&& (ALIGN) < FASTEST_ALIGNMENT) \
? FASTEST_ALIGNMENT : (ALIGN))
/* Make arrays of chars word-aligned for the same reasons. */
#define DATA_ALIGNMENT(TYPE, ALIGN) \
(TREE_CODE (TYPE) == ARRAY_TYPE \
&& TYPE_MODE (TREE_TYPE (TYPE)) == QImode \
&& (ALIGN) < FASTEST_ALIGNMENT ? FASTEST_ALIGNMENT : (ALIGN))
and sparc.c in the same folder.
Some basic alignment rules are defined in gcc/tree.c e.g. for void:
/* We are not going to have real types in C with less than byte alignment,
so we might as well not have any types that claim to have it. */
TYPE_ALIGN (void_type_node) = BITS_PER_UNIT;
TYPE_USER_ALIGN (void_type_node) = 0;
It will be compiled into gcc in the build process.
So default alignments are compiled in, but they can be changed by manipulating TREE type objects from the gcc code.
UPDATE: x86 config has better comments:
/* Minimum size in bits of the largest boundary to which any
and all fundamental data types supported by the hardware
might need to be aligned. No data type wants to be aligned
rounder than this.
Pentium+ prefers DFmode values to be aligned to 64 bit boundary
and Pentium Pro XFmode values at 128 bit boundaries. */
#define BIGGEST_ALIGNMENT 128
/* Decide whether a variable of mode MODE should be 128 bit aligned. */
#define ALIGN_MODE_128(MODE) \
((MODE) == XFmode || SSE_REG_MODE_P (MODE))
/* The published ABIs say that doubles should be aligned on word
boundaries, so lower the alignment for structure fields unless
-malign-double is set. */
/* ??? Blah -- this macro is used directly by libobjc. Since it
supports no vector modes, cut out the complexity and fall back
on BIGGEST_FIELD_ALIGNMENT. */
...
#define BIGGEST_FIELD_ALIGNMENT 32
...
/* If defined, a C expression to compute the alignment given to a
constant that is being placed in memory. EXP is the constant
and ALIGN is the alignment that the object would ordinarily have.
The value of this macro is used instead of that alignment to align
the object.
If this macro is not defined, then ALIGN is used.
The typical use of this macro is to increase alignment for string
constants to be word aligned so that `strcpy' calls that copy
constants can be done inline. */
#define CONSTANT_ALIGNMENT(EXP, ALIGN) ix86_constant_alignment ((EXP), (ALIGN))
/* If defined, a C expression to compute the alignment for a static
variable. TYPE is the data type, and ALIGN is the alignment that
the object would ordinarily have. The value of this macro is used
instead of that alignment to align the object.
If this macro is not defined, then ALIGN is used.
One use of this macro is to increase alignment of medium-size
data to make it all fit in fewer cache lines. Another is to
cause character arrays to be word-aligned so that `strcpy' calls
that copy constants to character arrays can be done inline. */
#define DATA_ALIGNMENT(TYPE, ALIGN) ix86_data_alignment ((TYPE), (ALIGN))
/* If defined, a C expression to compute the alignment for a local
variable. TYPE is the data type, and ALIGN is the alignment that
the object would ordinarily have. The value of this macro is used
instead of that alignment to align the object.
If this macro is not defined, then ALIGN is used.
One use of this macro is to increase alignment of medium-size
data to make it all fit in fewer cache lines. */
#define LOCAL_ALIGNMENT(TYPE, ALIGN) ix86_local_alignment ((TYPE), (ALIGN))
...
/* Set this nonzero if move instructions will actually fail to work
when given unaligned data. */
#define STRICT_ALIGNMENT 0
For arm, mips, sparc, and other archs (which limit unaligned access to memory), the alignment requirements of individual machine instructions may also be recorded in the arch's .md machine-description file (e.g. in sparc.md).
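As a side note, the per-type alignments that these target definitions ultimately produce can be checked directly on any platform with C11's _Alignof (a minimal sketch; compile with gcc -std=c11):
#include <stdio.h>
#include <stddef.h>   /* max_align_t */

int main(void)
{
    /* Print the alignment gcc chose for a few fundamental types. */
    printf("char        : %zu\n", _Alignof(char));
    printf("int         : %zu\n", _Alignof(int));
    printf("long long   : %zu\n", _Alignof(long long));
    printf("double      : %zu\n", _Alignof(double));
    printf("long double : %zu\n", _Alignof(long double));
    printf("max_align_t : %zu\n", _Alignof(max_align_t));
    return 0;
}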

How to get correct hDevMode values from CPrintDialogEx (PrintDlgEx)?

I'm displaying a CPrintDialogEx dialog to choose a printer and modify the settings. I set the hDevNames member so that a default printer will be selected, but I leave hDevMode set to NULL. On successful return I pull some values such as paper size out of the returned DEVMODE structure from hDevMode.
I'm having a problem because hDevMode appears to be initialized with the values from the default printer that I passed in, not the printer that was finally selected. How do I get the parameters from the actual selected printer?
As requested here's the relevant part of the code. I've deleted some of it in the interest of space. TOwnedHandle is a smart pointer I wrote for holding a memory handle and locking it automatically.
CPrintDialogEx dlg(PD_ALLPAGES | PD_NOCURRENTPAGE | PD_NOPAGENUMS | PD_NOSELECTION, this);
ASSERT(dlg.m_pdex.hDevMode == NULL);
ASSERT(dlg.m_pdex.hDevNames == NULL);
dlg.m_pdex.hDevNames = GlobalAlloc(GHND, sizeof(DEVNAMES) + iSizeName);
DEVNAMES * pDevNames = (DEVNAMES *) GlobalLock(dlg.m_pdex.hDevNames);
// ...
GlobalUnlock(dlg.m_pdex.hDevNames);
if ((dlg.DoModal() == S_OK) && (dlg.m_pdex.dwResultAction == PD_RESULT_PRINT))
{
    TOwnedHandle<DEVMODE> pDevMode = dlg.m_pdex.hDevMode;
    TRACE("Printer config = %dx%d %d\n", (int)pDevMode->dmPaperWidth, (int)pDevMode->dmPaperLength, (int)pDevMode->dmOrientation);
    // ...
}
Edit: I've determined that I don't get the problem if I don't set the hDevNames parameter. I wonder if I've discovered a Windows bug? This is in XP, I don't have a more recent version of Windows handy to test with.
I've distilled the code into a test that doesn't use MFC, this is strictly a Windows API problem. This is the whole thing, nothing left out except the definition of pDefaultPrinter - but of course it doesn't do anything useful anymore.
PRINTDLGEX ex = {sizeof(PRINTDLGEX)};
ex.hwndOwner = m_hWnd;
ex.Flags = PD_ALLPAGES | PD_NOCURRENTPAGE | PD_NOPAGENUMS | PD_NOSELECTION;
ex.nStartPage = START_PAGE_GENERAL;
#if 1
int iSizeName = (strlen(pDefaultPrinter) + 1) * sizeof(char);
ex.hDevNames = GlobalAlloc(GHND, sizeof(DEVNAMES) + iSizeName);
DEVNAMES * pDevNames = (DEVNAMES *) GlobalLock(ex.hDevNames);
ASSERT(pDevNames != NULL);
pDevNames->wDeviceOffset = sizeof(DEVNAMES);
strcpy((char *)pDevNames + pDevNames->wDeviceOffset, pDefaultPrinter);
GlobalUnlock(ex.hDevNames);
#endif
HRESULT hr = PrintDlgEx(&ex);
if ((hr == S_OK) && (ex.dwResultAction == PD_RESULT_PRINT))
{
    DEVMODE * pdm = (DEVMODE *) GlobalLock(ex.hDevMode);
    ASSERT(pdm != NULL);
    TRACE("Printer config = %dx%d %d\n", (int)pdm->dmPaperWidth, (int)pdm->dmPaperLength, (int)pdm->dmOrientation);
    GlobalUnlock(ex.hDevMode);
    DEVNAMES * pdn = (DEVNAMES *) GlobalLock(ex.hDevNames);
    ASSERT(pdn != NULL);
    TRACE(_T("Printer device = %s\n"), (char *)pdn + pdn->wDeviceOffset);
    GlobalUnlock(ex.hDevNames);
}
If I can't get a fix, I'd love to hear of a work-around.
After much head scratching I think I've figured it out.
When the dialog comes up initially, the hDevMode member gets filled with the defaults for the printer that is initially selected. If you select a different printer before closing the dialog, that DEVMODE structure is presented to the new printer driver; if the paper size doesn't make sense to the driver it may change it, and the drivers are not consistent.
The reason this tripped me up is that I was switching between three printers: two label
printers with very different characteristics, and a laser printer with US Letter paper.
The laser printer always responds with the proper dimensions but may indicate a wrong paper size code.
The first label printer will override the size provided by the laser printer but not the other label printer.
The second label printer will accept the size provided by the first label printer, because it's capable of using that size even though it's not loaded and not configured. It modifies the size provided by the laser printer by returning the maximum width and the Letter size length of 11 inches.
I determined two ways to work around the problem. The first is to implement IPrintDialogCallback and respond to SelectionChange calls by reloading the default DEVMODE for the newly selected printer. EDIT: I tried this and it does not work. CPrintDialogEx already implements an IPrintDialogCallback interface, making this easy. It appears that PrintDlgEx has its own internal handle that it uses to track the current DEVMODE structure and only uses the one in the PRINTDLGEX structure for input/output. There's no way to affect the DEVMODE while the dialog is up, and by the time it returns it's too late.
The second solution is to ignore the returned results entirely and work from the default paper configuration for the printer. Any changes made from the printer defaults within the dialog are lost completely, but for my application this is acceptable.
bool MyDialog::GetPaperSize(const TCHAR * pPrinterName, double & dPaperWidth, double & dPaperLength)
{
    // you need to open the printer before you can get its properties
    HANDLE hPrinter;
    if (OpenPrinter((TCHAR *)pPrinterName, &hPrinter, NULL))
    {
        // determine how much space is needed for the DEVMODE structure by the printer driver
        int iDevModeSize = DocumentProperties(m_hWnd, hPrinter, (TCHAR *)pPrinterName, NULL, NULL, 0);
        ASSERT(iDevModeSize >= (int)sizeof(DEVMODE));
        // allocate a DEVMODE structure and initialize it to a clean state
        std::vector<char> buffer(iDevModeSize, 0);
        DEVMODE * pdm = (DEVMODE *) &buffer[0];
        pdm->dmSpecVersion = DM_SPECVERSION;
        // ask the driver to fill in its default settings
        DocumentProperties(m_hWnd, hPrinter, (TCHAR *)pPrinterName, pdm, NULL, DM_OUT_BUFFER);
        ClosePrinter(hPrinter);
        // convert paper size from tenths of a mm to inches
        dPaperWidth = pdm->dmPaperWidth / 254.;
        dPaperLength = pdm->dmPaperLength / 254.;
        return true;
    }
    return false;
}
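For instance, once PrintDlgEx returns, the selected device name from hDevNames can be fed to this helper instead of trusting the returned DEVMODE (a sketch in the style of the question's code, which treats wDeviceOffset as a byte offset in an ANSI build; error handling omitted):
DEVNAMES * pdn = (DEVNAMES *) GlobalLock(ex.hDevNames);
const TCHAR * pDevice = (const TCHAR *)((char *)pdn + pdn->wDeviceOffset);
double dWidth = 0.0, dLength = 0.0;
if (GetPaperSize(pDevice, dWidth, dLength))
{
    // dWidth/dLength now hold the selected printer's default paper size in inches
    TRACE(_T("Default paper size = %.2f x %.2f in\n"), dWidth, dLength);
}
GlobalUnlock(ex.hDevNames);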
