What is the Xlib ZPixmap format? - xlib

Does anyone know what the Xlib ZPixmap format is for 32/24-bit images/pixmaps? Is it RGB(A), BGR(A), etc., or is it highly implementation-dependent?

Pixmaps have no color, period. They're just arrays of pixel values. The only drawables in X that have a color interpretation are Windows (channels defined by Visuals), Pictures (channels defined by Picture Formats), and GLX drawables (channels defined by either a GLX visual or an fbconfig). Any "color" interpretation you want to put on a pixmap is a function of where those pixels are going to end up (or, where they came from). If you're going to put them into a Window, look at the Visual of the Window to discover what the color channel masks are.
[edit: added GLX drawable info]

Related

How to calculate size of Windows bitmap font using FreeType

The Problem
I am loading the classic serife.fon file from Microsoft Windows using FreeType.
Here is how I set the size:
FT_Set_Pixel_Sizes(face, 0, fontHeight);
I use 0 for the fontWidth so that it will be auto-calculated based on the height.
How do I find the correct value for fontHeight such that the resulting font will be exactly 9 pixels tall?
Notes
Using trial and error, I know that the correct value is 32 - but I don't understand why.
I am not sure how relevant this is for bitmap fonts, but according to the docs:
pixel_size = point_size * resolution / 72
Substituting in the values:
point_size = 32
resolution = 96 (from FT_Get_WinFNT_Header)
gives:
pixel_size = 42.6666666
This is a long way from our target height of 9!
The docs do go on to say:
pixel_size computed in the above formula does not directly relate to the size of characters on the screen. It simply is the size of the EM square if it was to be displayed. Each font designer is free to place its glyphs as it pleases him within the square.
But again, I am not sure if this is relevant for bitmap fonts.
.fon files are .exe files with an FNT payload, where the FNT payload can be a vector or raster font. If this is a raster font (which is most likely), then the dfPixHeight value in the FNT header will tell you what size it's meant to be; FreeType 2 exposes that value as the pixel_height field of FT_WinFNT_Header.
(And of course, note that using any size other than the actual raster size of the FNT is going to lead to hilarious headaches, because bitmap scaling is the kind of madness that's so bad that OpenType instead went with "just embed as many bitmaps as you need, at however many sizes you need", because that's the only way your bitmaps are going to look good.)
The FNT-specific FT2 documentation can be found over on https://www.freetype.org/freetype2/docs/reference/ft2-winfnt_fonts.html but you may need to read it in conjunction with https://jeffpar.github.io/kbarchive/kb/065/Q65123 (or https://web.archive.org/web/20120215123301/http://support.microsoft.com/kb/65123) to find any further mappings that you might need between names/fields as defined in the FNT spec and FT2's naming conventions.

What does this bit of MSDN documentation mean?

The first parameter to the EnumFontFamiliesEx function, according to the MSDN documentation, is described as:
hdc [in]
A handle to the device context from which to enumerate the fonts.
What exactly does it mean?
What does device context mean?
Why should a device context be related to fonts?
Question (3) is a legitimately difficult thing to find an explanation for, but the reason is simple enough:
Some devices provide their own font support. For example, a PostScript printer will allow you to use PostScript fonts. But those same fonts won't be usable when rendering on-screen, or to another printer without PostScript support. Another example would be that a plotter (which is a motorized pen) requires vector fonts with a fixed stroke thickness, so raster fonts can't be used with such a device.
If you're interested in device-specific font support, you'll want to know about the GetDeviceCaps function.
The Windows API uses the concept of handles extensively. A handle is an integer value that you can use as a token to access an API resource. You can think of it as a kind of "this" pointer, although it is definitely not a pointer.
A device context is an object within the Windows API that represents something you can draw on or display graphics on. It might be a printer, a bitmap, a screen, or some other context in which creating graphics makes sense. In Windows, fonts must be selected into a device context before they can be used. To find out which fonts are currently available in a given device context, you enumerate them. That's where EnumFontFamiliesEx comes in.
Microsoft has other articles on device contexts:
https://learn.microsoft.com/en-us/windows/win32/gdi/about-device-contexts
An application must inform GDI to load a particular device driver and,
once the driver is loaded, to prepare the device for drawing
operations (such as selecting a line color and width, a brush pattern
and color, a font typeface, a clipping region, and so on). These tasks
are accomplished by creating and maintaining a device context (DC). A
DC is a structure that defines a set of graphic objects and their
associated attributes, and the graphic modes that affect output. The
graphic objects include a pen for line drawing, a brush for painting
and filling, a bitmap for copying or scrolling parts of the screen, a
palette for defining the set of available colors, a region for
clipping and other operations, and a path for painting and drawing
operations. Unlike most of the structures, an application never has
direct access to the DC; instead, it operates on the structure
indirectly by calling various functions.
Font rendering is, of course, a kind of drawing.

Picture Transparency

How do I make a picture transparent in VB 6.0, so that when I add an image and place the picture over it, the background shows through behind it?
From Rod Stephens's excellent VB Helper site (particularly good on graphics in VB6):
HowTo: Overlay one image on another with a transparent color by using PSet
Description from the site:
This program simply loops through the pixels in the images. For each
image in top-to-bottom order, the program looks for a color other than
the one defined as transparent. When it finds such a color, it stops
looking at the images and sets the output pixel's color using PSet.
Note that there are faster ways to access color values in VB 6 and VB
.NET, and that there are faster methods for merging images if you have
an overlay mask. Note also that VB .NET provides tools for setting a
transparent color for an image so this problem is trivial in VB .NET.
I think you should use an OCX or DLL library to fix it.
You can use third-party .ocx files to get that effect. Check out this link: http://www.vbforums.com/showthread.php?636390-vb6-Transparent-PictureBox

WP7 XNA: How to change size or style of SpriteFont fonts dynamically in code?

There seems to be no way to change the font size or style in code, right? Is the only way to duplicate the font files and load them all when the program starts?
Thanks
SpriteFonts convert a font, with its style, size, and other parameters, into a pixel-based format for use as a texture within XNA. Those pixels are static, so yes, there is no way to change them, short of manipulating them pixel by pixel.
However, there is scaling (though it won't look so great when scaling larger) to help with size adjustments, plus you could, as you said, create multiple SpriteFont files from the same base font for different styles, and dynamically choose one of those sprite font "textures" in your code.
Beyond that, for truly dynamic runtime usage, you'd need to create these sprite font textures on the fly, in memory. That means doing what the SpriteFont Content Pipeline project does, but at runtime instead. This is possible in WinForms, but as far as I know it is not really an option on WP7, which you apparently are targeting.

Why are colors displayed differently on Windows / OS X?

We are having an application ported from Windows to Mac OS, and colors are being displayed differently on both platforms. Here's an example:
In this case, we're telling the application to use green 0,140,0 and blue 25,0,75. On windows, this works great (top image). On the Mac, apparently OS X decides to "reinterpret" the colors and displays them differently (bottom image).
Is there something we can do to tell the operating system to stop being creative with our color definitions? It's going to be hard to make things look good on both platforms if the mac arbitrarily changes our color definitions by ~10%.
Edit: Here's an example of the code we're using to set the color for the blue used above:
m_colour = CGColorCreateGenericRGB(25 / 255.0, //r
0 / 255.0, //g
75 / 255.0, //b
1.0); //a
Thanks.
The Mac uses a complex colourspace system called ColorSync to ensure colours appear identical on different devices. As a result, colours may sometimes be shifted slightly in the RGB space so that they appear to the eye identical on properly calibrated displays, printers, etc.
If you show us the code you use to generate that shade of green, we can show you how to modify it to avoid this colour correction. However, unless there's a pressing reason why you want to avoid it, it's usually better to let it happen, as you do not have a wide range of display models to test against.
Edit: CGColorCreateGenericRGB() creates a colour in the generic RGB colour space, so it's going to end up shifting slightly depending on your display calibration. Unfortunately for you, it is no longer possible (as of Mac OS X 10.4) to create an instance of CGColor that is device-dependent (and therefore not subject to calibration). You can, however, create a CGColor in the colour space of the target drawing context; that will tell Quartz that no conversion is necessary.
If you've created the context yourself, you should keep a reference to the colour space you used (of type CGColorSpaceRef). If the context was created at the Cocoa level (such as by -[NSImage lockFocus] or -[NSView drawRect:]), then you should use the relevant NSColor APIs instead of the CGColor APIs (i.e. +[NSColor colorWithDeviceRed:green:blue:alpha:]).
If you must use Quartz drawing, you can call CGContextSetRenderingIntent() to tell the context how you want to have colours converted, but there is no guarantee that a conversion will not take place.
