How do bitmaps draw pixels on a window? - windows

So I followed a few tutorials on how to draw on a window using the windows.h library, and the part I don't really understand is the bitmap part. They used the CreateBitmap() and StretchBlt() functions to draw on the window. Does the window reference a bitmap to draw pixels on the screen accordingly, and is a bitmap basically a chunk of memory large enough to store each pixel's position and color value? If so, is a bitmap automatically generated every time you create a window? It seems that you don't really need to declare a bitmap or use CreateBitmap() just to write text on the window you created; you only need to create a bitmap when you want to draw custom pixels.

A window will receive the WM_PAINT message when it needs to be painted. This can happen because InvalidateRect was called, the window was resized etc.
Where the pixels are stored ("in" the HWND) is an implementation detail you don't have to worry about. On some versions/configurations the GDI functions are hardware accelerated and the result might be stored directly in the GPU, in others everything might be implemented in software and run on the CPU. When using a layered window I'm guessing everything older than Vista will use an internal bitmap to store the pixels.
GDI/GDI+ is the classic way to draw windows. If you need per-pixel alpha transparency you would draw to a bitmap and call UpdateLayeredWindow; otherwise you would just draw using any GDI function you want in WM_PAINT. This might include drawing one or several bitmaps, text, and lines/curves directly to the HWND's HDC. Because this can cause flicker in certain cases (if any area is drawn to more than once in one paint cycle), people might draw to their own bitmap first and then BitBlt this bitmap to the window; this is called double-buffering.
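As a minimal sketch, a double-buffered WM_PAINT handler inside the window procedure might look like this (the drawing calls are just placeholders):
case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);

    RECT rc;
    GetClientRect(hwnd, &rc);

    // Draw everything into an off-screen bitmap first...
    HDC memDC = CreateCompatibleDC(hdc);
    HBITMAP memBmp = CreateCompatibleBitmap(hdc, rc.right, rc.bottom);
    HBITMAP oldBmp = (HBITMAP)SelectObject(memDC, memBmp);

    FillRect(memDC, &rc, (HBRUSH)(COLOR_WINDOW + 1));
    TextOut(memDC, 10, 10, TEXT("Hello"), 5);
    // ...more GDI calls: lines, curves, StretchBlt of bitmaps, etc.

    // ...then copy the finished frame to the window in one BitBlt to avoid flicker.
    BitBlt(hdc, 0, 0, rc.right, rc.bottom, memDC, 0, 0, SRCCOPY);

    SelectObject(memDC, oldBmp);
    DeleteObject(memBmp);
    DeleteDC(memDC);
    EndPaint(hwnd, &ps);
    return 0;
}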
The new way to draw is Direct2D/DirectComposition.

Related

StretchBlt a bitmap without overwriting what's already drawn at the destination

I have a drawing application that draws many lines, polygons etc. on a device context. I'm also drawing background bitmaps that come from an external source and take a long time to load.
When drawing a new frame I first start threads that load the bitmaps, then draw my vector data, and at the end would like to draw the loaded bitmaps while preserving the vector data. I need the bitmaps "under" the vector data but I can't draw them first because they're not loaded, and waiting for them to load would slow things down a lot.
My idea was to apply the "bitmap with transparency" technique:
1. Copy the portion of my device context that would be covered by the bitmap into a monochrome image; anything drawn with the background color should be drawn on, everything else is off-limits.
2. Copy the image over the bitmap (with the proper ROP code) to mark what needs to be transparent in the bitmap.
3. StretchBlt the modified bitmap onto the device context.
The bitmap needs to be transformed to fit properly on my device context, so I use SetWorldTransform to apply an affine transformation to my device context. The transformation has both a rotation and a shear.
Unfortunately, this fails at step 1 because, as per the documentation of StretchBlt:
If the source transformation has a rotation or shear, an error occurs.
Now, I did try setting the inverse transformation on my monochrome DC, which would transform my sheared data into a proper rectangle, but the function still fails.
So I guess my question is: how do I bitblit a raster image without deleting my data (a function where I give a transparent color in the destination, not the source, would be perfect), OR is there an easy way to extract the color data from a device context that has a rotate and shear transform on it?
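For reference, the classic mask-based transparent blit that these steps build on (with no world transform involved) looks roughly like the sketch below; hdcSrc, hdcDest, crTransparent, w, h, x and y are placeholders, and step 3 modifies the source pixels, so it is normally run on a working copy of the bitmap:
// hdcSrc holds the bitmap, hdcDest is the target DC, crTransparent marks the transparent color.
HDC hdcMask = CreateCompatibleDC(hdcDest);
HBITMAP bmMask = CreateBitmap(w, h, 1, 1, NULL);              // monochrome mask
HBITMAP oldMask = (HBITMAP)SelectObject(hdcMask, bmMask);

// 1. Build the mask: pixels equal to crTransparent become white (1), the rest black (0).
COLORREF oldBk = SetBkColor(hdcSrc, crTransparent);
BitBlt(hdcMask, 0, 0, w, h, hdcSrc, 0, 0, SRCCOPY);
SetBkColor(hdcSrc, oldBk);

// 2. AND the mask into the destination: opaque areas turn black, transparent areas are kept.
SetBkColor(hdcDest, RGB(255, 255, 255));
SetTextColor(hdcDest, RGB(0, 0, 0));
BitBlt(hdcDest, x, y, w, h, hdcMask, 0, 0, SRCAND);

// 3. Blacken the transparent pixels in the source, then OR the source into the black hole.
SetBkColor(hdcSrc, RGB(0, 0, 0));
SetTextColor(hdcSrc, RGB(255, 255, 255));
BitBlt(hdcSrc, 0, 0, w, h, hdcMask, 0, 0, SRCAND);
BitBlt(hdcDest, x, y, w, h, hdcSrc, 0, 0, SRCPAINT);

SelectObject(hdcMask, oldMask);
DeleteObject(bmMask);
DeleteDC(hdcMask);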

DwmEnableBlurBehindWindow makes the entire client area transparent

Aero glass causes a lot of problems for people trying to draw on it. Anything with an alpha value of 255 seems to be treated as transparent, with DWM using an additive blur to draw it. I want part of the client area to use Aero glass, with the rest of it treated as opaque, so I don't have to deal with the headache of common controls not rendering properly.
MSDN lists a function DwmEnableBlurBehindWindow which lets you mark part of the client area as blurred by DWM. It takes a pointer to a DWM_BLURBEHIND structure, which has an HRGN handle to the region of the window. When I use this function, the entire window becomes transparent with an additive blend, but only the region of the window I passed to DwmEnableBlurBehindWindow gets blurred. Is there a way I can keep the rest of the window from becoming transparent?
What I have looks a bit like:
DWM_BLURBEHIND blur = {0};
blur.dwFlags = DWM_BB_ENABLE | DWM_BB_BLURREGION;
blur.hRgnBlur = CreateRectRgn(0, 0, 90, 90);
blur.fEnable = TRUE;
DwmEnableBlurBehindWindow(hwnd, &blur);

RECT rect;
GetClientRect(hwnd, &rect);
FillRect(hdc, &rect, CreateSolidBrush(0));
From the MSDN Library article:
The alpha values in the window are honored and the rendering atop the blur will use these alpha values. It is the application's responsibility to ensure that the alpha values of all pixels in the window are correct. Some GDI operations do not preserve alpha values, so care must be taken when presenting child windows, as the alpha values they contribute are unpredictable.
Make that most GDI operations, like FillRect(). The brush you created is drawn with 24-bit colors, so the alpha will be 0, which makes the window transparent. You'll need to switch to, say, GDI+. Text is particularly troublesome, as are legacy Windows controls like EDIT and LISTBOX, which draw with GDI.
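As a rough GDI+ sketch of keeping an area solid by filling it with an explicit alpha of 255 (it reuses the hdc and rect from the snippet above; the coordinates and color are illustrative, and GdiplusStartup is assumed to have been called):
// Assumes #include <gdiplus.h> and linking gdiplus.lib.
Gdiplus::Graphics g(hdc);
// An ARGB brush with alpha 255: DWM treats this area as opaque instead of glass.
Gdiplus::SolidBrush opaque(Gdiplus::Color(255, 240, 240, 240));
g.FillRectangle(&opaque, 90, 0, rect.right - 90, rect.bottom);
// Leave only the blur region (0, 0, 90, 90 here) filled with alpha-0 black,
// so the glass shows through just there.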

How can I improve CGContextFillRect and CGContextDrawImage performance

Those two functions are currently my bottleneck. I am working with very large bitmaps.
How can I improve their performance?
You could cache smaller versions of your bitmaps which you create before drawing the first time and then simply draw the downscaled samples instead of the full-blown 15 megapixel stuff.
Also, make sure you are only drawing what is necessary, i.e. in drawRect:(NSRect)rect only draw inside the rect (unless absolutely necessary), and try not to perform drawing outside of that method.
If you're drawing large background images with content in the foreground that moves, consider using a layer-backed NSView, adding a layer and setting its background image. You can then draw your content in other layers (or layer-backed NSViews) above the background layer, and the view will never need to redraw the background image because it is stored in the GPU's texture memory. Your current image is too large for a single CALayer (CALayers are limited to the maximum OpenGL texture size of 2048 x 2048), so you will probably need to break it up into tiles.
Otherwise, as @iolo mentioned, you should make sure that you only redraw the parts of the view that really need updating.
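A rough CoreGraphics sketch of the "cache a downscaled copy" suggestion above (the scale factor, color space, and alpha setting are assumptions):
// #include <CoreGraphics/CoreGraphics.h>
// Build a smaller cached copy of a huge CGImage once, then draw the copy each frame.
static CGImageRef CreateDownscaledImage(CGImageRef source, CGFloat scale)
{
    size_t w = (size_t)(CGImageGetWidth(source) * scale);
    size_t h = (size_t)(CGImageGetHeight(source) * scale);

    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, w, h, 8, 0, space,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);

    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
    CGContextDrawImage(ctx, CGRectMake(0, 0, w, h), source);   // expensive, but done only once

    CGImageRef scaled = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return scaled;   // caller releases with CGImageRelease when done
}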

Transparency to text in GDI

I have created a bitmap using GDI+. I am drawing text onto that bitmap using GDI DrawText. Using DrawText I am unable to apply transparency.
Any help or code will be appreciated.
If you want to draw text without a background fill, SetBkMode(hdc, TRANSPARENT) will tell GDI to leave the background alone when drawing text.
Actually rendering the foreground color of the text with alpha is going to be more complicated. GDI does not support alpha channels all that widely in its APIs; outside of AlphaBlend, all it really does is preserve the channel. It is not valid to set the upper bits of a COLORREF to alpha values, as the high byte is used for flags that indicate whether the COLORREF is (rather than an RGB value) a palette entry.
So, unfortunately, your only real way forward is to:
Create a 32-bit DIBSection (CreateDIBSection). This gives you an HBITMAP that is guaranteed to be able to hold alpha information. If you create a bitmap via one of the other bitmap creation functions, it is going to be at the device color depth, which might not be 32bpp.
DrawText onto the DIBSection.
When you created the DIBSection you got a pointer to the actual memory. At this point you need to go through the memory and set the alpha values. I don't think DrawText is going to do anything to the alpha channel by itself at all. I'm thinking a simple check of the RGB components of each DWORD of the bitmap: if they're the foreground color, rewrite the DWORD with a 50% (or whatever) alpha in the alpha byte; if they're the background color, rewrite with a 100% alpha in the alpha byte. *
AlphaBlend the bitmap onto the final destination. AlphaBlend requires the alpha channel in the source to be pre-multiplied.
* It might be sufficient to simply memset the DIBSection with a 50% alpha before doing the DrawText, and ensure that the BkColor is black. I don't know what DrawText might do to the alpha channel though. Some experimentation is called for.
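A rough sketch of the four steps above; the 50% alpha, the choice to leave background pixels fully transparent, and the helper's name and parameters are my assumptions (link with msimg32.lib for AlphaBlend):
// Draw text into a 32bpp DIBSection, patch the alpha channel, then AlphaBlend onto hdcDest.
void DrawAlphaText(HDC hdcDest, int x, int y, int cx, int cy, LPCTSTR text)
{
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = cx;
    bmi.bmiHeader.biHeight = -cy;                 // negative = top-down
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = NULL;
    HBITMAP dib = CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);
    HDC memDC = CreateCompatibleDC(NULL);
    HBITMAP old = (HBITMAP)SelectObject(memDC, dib);

    // Step 2: white text on a black background.
    RECT rc = {0, 0, cx, cy};
    SetBkColor(memDC, RGB(0, 0, 0));
    SetTextColor(memDC, RGB(255, 255, 255));
    DrawText(memDC, text, -1, &rc, DT_SINGLELINE | DT_LEFT | DT_TOP);
    GdiFlush();                                   // finish GDI drawing before touching the bits

    // Step 3: DrawText leaves the alpha byte at 0, so patch it by hand.
    // Crude rule: any non-black pixel counts as text and gets ~50% pre-multiplied alpha.
    DWORD* px = (DWORD*)bits;
    for (int i = 0; i < cx * cy; ++i)
    {
        if (px[i] & 0x00FFFFFF)
        {
            BYTE r = (BYTE)(((px[i] >> 16) & 0xFF) / 2);  // pre-multiply by 50%
            BYTE g = (BYTE)(((px[i] >> 8) & 0xFF) / 2);
            BYTE b = (BYTE)((px[i] & 0xFF) / 2);
            px[i] = (0x80u << 24) | (r << 16) | (g << 8) | b;
        }
        // background pixels stay 0x00000000 (fully transparent)
    }

    // Step 4: AlphaBlend with per-pixel alpha.
    BLENDFUNCTION bf = {AC_SRC_OVER, 0, 255, AC_SRC_ALPHA};
    AlphaBlend(hdcDest, x, y, cx, cy, memDC, 0, 0, cx, cy, bf);

    SelectObject(memDC, old);
    DeleteDC(memDC);
    DeleteObject(dib);
}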
SIMPLE and EASY solution :)
I had this problem. I tried changing the alpha values and premultiplying, but there was another problem: antialiased and ClearType fonts were not shown correctly (ugly edges). So what I did...
Compose your scene (bitmaps, graphics, etc.).
BitBlt the required rectangle from this scene (the same rectangle where you want your text to be) to a memory DC with a compatible bitmap selected, at destination coordinates 0,0.
Draw your text into that rectangle in the memory DC.
Now AlphaBlend that rectangle, without AC_SRC_ALPHA in the BLENDFUNCTION and with the desired SourceConstantAlpha, from this memory DC back to your scene DC.
I think you've got it :)
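A short sketch of that sequence; hdcScene, rcText, the text, and the ~60% SourceConstantAlpha are illustrative:
// hdcScene already contains the composed scene; rcText is the rectangle the text should occupy.
int w = rcText.right - rcText.left, h = rcText.bottom - rcText.top;

HDC memDC = CreateCompatibleDC(hdcScene);
HBITMAP bmp = CreateCompatibleBitmap(hdcScene, w, h);
HBITMAP old = (HBITMAP)SelectObject(memDC, bmp);

// 1. Copy the target rectangle of the scene into the memory DC.
BitBlt(memDC, 0, 0, w, h, hdcScene, rcText.left, rcText.top, SRCCOPY);

// 2. Draw the text over that copy as normal, fully opaque GDI text (ClearType looks right
//    here because it blends against the real background).
RECT rc = {0, 0, w, h};
SetBkMode(memDC, TRANSPARENT);
SetTextColor(memDC, RGB(255, 255, 255));
DrawText(memDC, TEXT("Hello"), -1, &rc, DT_SINGLELINE | DT_CENTER | DT_VCENTER);

// 3. Blend the rectangle back with a constant alpha and no AC_SRC_ALPHA. The copy already
//    contains the scene, so blending it over itself changes nothing; only the text fades.
BLENDFUNCTION bf = {AC_SRC_OVER, 0, 153, 0};      // ~60% opacity
AlphaBlend(hdcScene, rcText.left, rcText.top, w, h, memDC, 0, 0, w, h, bf);

SelectObject(memDC, old);
DeleteObject(bmp);
DeleteDC(memDC);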
Hmmm, trying to do the same here. Wondering: I see that when you create a DIB section you specify the "masks", that is, an R, G, B (and alpha) mask.
If (and that's a big if) it really does not alter the alpha channel, then you might specify the masks differently for two bitmap headers. One specifies the R, G, B bits in the proper places; the other makes them all have their bits assigned to the alpha channel (set the text color to white in this case). Then render in two passes: one to load the color values, the other to load the alpha values.
Anyway, just musing :)
While this question is about making text semi-transparent, I had the opposite problem.
DrawText was making the text in my layered window (UpdateLayeredWindow) semi-transparent ... and I didn't want it to be.
Take a look at this other question ... since in that other question I post some code that you could easily modify ... and it is almost exactly what Chris Becke suggests in his answer.
A limited answer for a specific situation:
If you have a graphic with an alpha channel and you want to draw opaque text over a locally opaque background, first prepare your 32-bit bitmap with 32-bit brushes created with CreateDIBPatternBrushPt. Then scan through the bitmap bits inverting the alpha channel, draw your text as you normally would (including SetBkMode to TRANSPARENT), then invert the alpha in the bitmap again. You can skip the first inversion if you invert the alpha of your brushes.

How do I create a bitmap with an alpha channel on the fly using GDI?

I am using layered windows and drawing a rounded rectangle on the screen. However, I'd like to smooth out the jagged edges. I think that I'll need alpha blending for this. Is there a way I can do this with GDI?
CreateDIBSection. Fill in the BITMAPINFOHEADER with 32bpp. Fill in the alpha channel with pre-multiplied alpha and you're good to go.
AlphaBlend is the API to actually blit 32bpp bitmaps with an alpha channel.
You can do this in C# using the LockBits method of the Bitmap class (see this question for an explanation). You definitely don't want to use GetPixel and SetPixel for this, as they are hideously slow (even though you'd just be manipulating the edges and not the entire bitmap).
Any chance of using GDI+ instead of GDI? It supports antialiasing and transparency right out of the box.
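For example, an antialiased rounded rectangle in GDI+ might look like the following sketch (x, y, w, h, radius and the color are placeholders; GdiplusStartup is assumed to have been called and gdiplus.lib linked):
Gdiplus::Graphics g(hdc);
g.SetSmoothingMode(Gdiplus::SmoothingModeAntiAlias);

// Build the rounded rectangle from four corner arcs.
Gdiplus::GraphicsPath path;
int d = radius * 2;
path.AddArc(x, y, d, d, 180.0f, 90.0f);                  // top-left corner
path.AddArc(x + w - d, y, d, d, 270.0f, 90.0f);          // top-right
path.AddArc(x + w - d, y + h - d, d, d, 0.0f, 90.0f);    // bottom-right
path.AddArc(x, y + h - d, d, d, 90.0f, 90.0f);           // bottom-left
path.CloseFigure();

Gdiplus::SolidBrush fill(Gdiplus::Color(200, 40, 40, 40));   // semi-transparent fill
g.FillPath(&fill, &path);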
There isn't an easy way to do such drawing with just GDI calls. What you want isn't just alpha blending: you want anti-aliasing. That usually involves drawing what you want at a larger resolution and then scaling down.
What I've done in the past for similar problems is to use an art program to draw whatever shape I want (e.g. a rounded corner) much larger than I needed it in black and white. When I wanted to draw it I would scale the black and white bitmap to whatever size I wanted (using a variant of a scaling class from Code Project). This gives me a grayscale image that I can use as an alpha channel, which I'd then use for alpha blending, either by calling the Win32 function AlphaBlend, or by using a DIBSection and manually changing the appropriate pixels.
Another variation of this approach would be to allocate a DIBSection about four times larger than you wanted the final result, draw into that, and then use the above scaling class to scale it down: the scaling from a larger image will give the appropriate smoothing effect.
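A rough GDI-only sketch of that draw-large-then-shrink idea, using a HALFTONE StretchBlt in place of the Code Project scaling class (hdc, x, y, w, h are placeholders); the shrunken black-and-white result is what you would then reuse as a grayscale alpha mask:
// Draw the shape at 4x size into a memory DC, then shrink it with smoothing.
const int scale = 4;
HDC bigDC = CreateCompatibleDC(hdc);
HBITMAP bigBmp = CreateCompatibleBitmap(hdc, w * scale, h * scale);
HBITMAP oldBig = (HBITMAP)SelectObject(bigDC, bigBmp);

RECT big = {0, 0, w * scale, h * scale};
FillRect(bigDC, &big, (HBRUSH)GetStockObject(WHITE_BRUSH));
SelectObject(bigDC, GetStockObject(BLACK_BRUSH));
RoundRect(bigDC, 0, 0, w * scale, h * scale, 20 * scale, 20 * scale);   // the shape, drawn large

// HALFTONE averages source pixels when shrinking, which smooths the jagged edges.
SetStretchBltMode(hdc, HALFTONE);
SetBrushOrgEx(hdc, 0, 0, NULL);          // required after switching to HALFTONE
StretchBlt(hdc, x, y, w, h, bigDC, 0, 0, w * scale, h * scale, SRCCOPY);

SelectObject(bigDC, oldBig);
DeleteObject(bigBmp);
DeleteDC(bigDC);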
If all this sounds like quite a lot of work: well, it is.
EDIT: To answer the title of this question: you can create a bitmap with an alpha channel by calling CreateDIBSection. That won't on its own do what you want though, I think.
