Saving Bitmaps on Windows 10 with GDI scaling active - winapi

I have an MFC application with toolbars (using CMFCToolbar). I create the toolbar bitmap on the fly from bitmaps in files and resources. The DIBs have different color formats.
So I create an empty toolbar bitmap compatible with the screen DC.
Then I open all the bitmaps and blit their contents into the toolbar bitmap (GDI does the colorspace conversion and stretching for me).
Then I save the bitmap to a 24-bit DIB file.
Then I create the toolbar object and load the image.
That has worked for ages and is working now except for one case:
Recently we had to enable GDI scaling for Windows 10 1703 and later.
On a system with a high-resolution display and 200% scaling (like a Surface) the following effect occurs:
All toolbar icons are distorted.
I also found the reason:
When saving the composed image I only get the top-left quarter of the image.
The width and height of the bitmap did not change (say, 1024x15) compared to a normal-resolution display without GDI scaling, but the bitmap only contains the pixels of the top-left quarter (see example below).
So I assume the device context tells Windows about the 200% scaling: when blitting from source to target, the image is scaled up automatically, but the reported dimensions of the bitmap do not change.
How can I save the unscaled bitmap?
-or-
How can I correctly save the scaled bitmap? Where do I get the missing pixels? Where do I get the proper dimensions? (The HBITMAP reports only the unscaled dimensions.)
Example: with no GDI scaling the saved image is correct; with 200% scaling it has the same dimensions but contains only the top-left quarter of the correct image (screenshots not included here).

Summary and solution:
Let's say we create a memory bitmap compatible with the screen format (a DDB):
CBitmap toolBitmap;
toolBitmap.CreateCompatibleBitmap (pDC, 1000, 20);
Later we blit something into the memory bitmap (what exactly does not matter here). Now we want to save the bitmap (write it as a DIB to a file).
Although we know the dimensions (here: 1000x20), we should not use them: on Windows 10, when the process has GDI scaling activated and a high-resolution display with scaling is used, the dimensions might have changed internally, so the bitmap is not 1000x20 anymore.
This one fails:
BITMAP bmHdr;
toolBitmap.GetObject(sizeof(BITMAP), &bmHdr);
The BITMAP structure contains the original dimensions (1000x20). Using them when saving to a file results in an incomplete image; only the upper-left part will be stored.
This one works - we can retrieve the scaled dimensions:
BITMAPINFO bi = {};
bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
int result = GetDIBits(pDC->GetSafeHdc(), (HBITMAP)toolBitmap.GetSafeHandle(), 0, 0, NULL, &bi, DIB_RGB_COLORS);
Now we can proceed with new dimensions.
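A minimal continuation sketch (my own illustration, not from the original post), assuming the GetDIBits query above succeeded - the scaled dimensions can be read from the returned header and used to fetch the actual pixels:
int scaledWidth  = bi.bmiHeader.biWidth;
int scaledHeight = abs(bi.bmiHeader.biHeight);           // biHeight can be negative for top-down DIBs
bi.bmiHeader.biBitCount = 24;                             // request 24-bit pixel data
bi.bmiHeader.biCompression = BI_RGB;
int stride = ((scaledWidth * 3) + 3) & ~3;                // DIB rows are DWORD-aligned
std::vector<BYTE> pixels(stride * scaledHeight);          // needs <vector>
GetDIBits(pDC->GetSafeHdc(), (HBITMAP)toolBitmap.GetSafeHandle(),
          0, scaledHeight, pixels.data(), &bi, DIB_RGB_COLORS);
These pixels, preceded by a BITMAPFILEHEADER and the BITMAPINFOHEADER, are what go into the 24-bit DIB file.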
I ended up using GDI+ functions, which also save the complete (scaled) bitmap:
Gdiplus::Bitmap bm((HBITMAP)toolBitmap.GetSafeHandle(), NULL);
Gdiplus::Status status = bm.Save(pwszFileName, &clsidEncoder, NULL);
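For completeness, a hedged sketch (not from the original post) of how clsidEncoder could be obtained for the BMP encoder; GDI+ must already be initialized with GdiplusStartup, and this needs <gdiplus.h>, <vector> and gdiplus.lib:
CLSID clsidEncoder = {};
UINT num = 0, size = 0;
Gdiplus::GetImageEncodersSize(&num, &size);
std::vector<BYTE> buffer(size);
Gdiplus::ImageCodecInfo* pInfo = reinterpret_cast<Gdiplus::ImageCodecInfo*>(buffer.data());
Gdiplus::GetImageEncoders(num, size, pInfo);
for (UINT i = 0; i < num; ++i)
    if (wcscmp(pInfo[i].MimeType, L"image/bmp") == 0)     // or image/png, image/jpeg, ...
        clsidEncoder = pInfo[i].Clsid;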
I presume there is a ton of old MFC and GDI code that will not work correctly with GDI scaling activated on Windows 10.

Related

How do bitmaps draw pixels on a window?

So I followed a few tutorials on how to draw on a window using the windows.h library, and the part of the tutorials that I don't really understand is the bitmap part. They used the CreateBitmap() and StretchBlt() functions to draw on the window. Does the window reference a bitmap to draw pixels on the screen accordingly, and is a bitmap basically a chunk of memory large enough to store every pixel's position and color value? If so, is a bitmap automatically generated every time you create a window? It seems that you don't really need to declare a bitmap or use CreateBitmap() to put text on the window you created; you only need to create a bitmap when you want to draw custom pixels.
A window will receive the WM_PAINT message when it needs to be painted. This can happen because InvalidateRect was called, the window was resized etc.
Where the pixels are stored ("in" the HWND) is an implementation detail you don't have to worry about. On some versions/configurations the GDI functions are hardware accelerated and the result might be stored directly in the GPU, in others everything might be implemented in software and run on the CPU. When using a layered window I'm guessing everything older than Vista will use an internal bitmap to store the pixels.
GDI/GDI+ is the classic way to draw windows. If you need per-pixel alpha transparency you would draw to a bitmap and call UpdateLayeredWindow; otherwise you would just draw using any GDI function you want in WM_PAINT. This might include drawing one or several bitmaps, text, and lines/curves directly to the HWND's HDC. As this can cause flicker in certain cases (if any area is drawn to more than once in one paint cycle), people might draw to their own bitmap first and then BitBlt this bitmap to the window; this is called double-buffering.
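A minimal double-buffering sketch (my own illustration; a hypothetical OnPaint helper called from the window procedure's WM_PAINT handler): the whole frame is drawn into a memory bitmap first and then copied to the window in a single blit.
void OnPaint(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    RECT rc;
    GetClientRect(hwnd, &rc);

    HDC hdcMem = CreateCompatibleDC(hdc);
    HBITMAP hbmMem = CreateCompatibleBitmap(hdc, rc.right, rc.bottom);
    HGDIOBJ hOld = SelectObject(hdcMem, hbmMem);

    // Draw the whole frame into the memory bitmap first ...
    FillRect(hdcMem, &rc, (HBRUSH)(COLOR_WINDOW + 1));
    TextOutW(hdcMem, 10, 10, L"Hello", 5);

    // ... then copy the finished frame to the window in one operation.
    BitBlt(hdc, 0, 0, rc.right, rc.bottom, hdcMem, 0, 0, SRCCOPY);

    SelectObject(hdcMem, hOld);
    DeleteObject(hbmMem);
    DeleteDC(hdcMem);
    EndPaint(hwnd, &ps);
}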
The new way to draw is Direct2d/DirectComposition.

StretchBlt a bitmap without overwriting what's already drawn at the destination

I have a drawing application that draws many lines, polygons etc. on a device context. I'm also drawing background bitmaps that come from an external source and take a long time to load.
When drawing a new frame I first start threads that load the bitmaps, then draw my vector data, and at the end would like to draw the loaded bitmaps while preserving the vector data. I need the bitmaps "under" the vector data but I can't draw them first because they're not loaded, and waiting for them to load would slow things down a lot.
My idea was to apply the "bitmap with transparency" technique:
Copy the portion of my device context that would be covered by the bitmap into a monochrome image; anything drawn with the background color should be drawn on, everything else is off-limits.
Copy the image over the bitmap (with the proper ROP code) to mark what needs to be transparent in the bitmap.
StretchBlt the modified bitmap onto the device context.
The bitmap needs to be transformed to fit properly on my device context, so I use SetWorldTransform to apply an affine transformation to my device context. The transformation has both a rotation and a shear.
Unfortunately, this fails at step 1 because, as per the documentation of StretchBlt:
If the source transformation has a rotation or shear, an error occurs.
Now, I did try setting the inverse transformation on my monochrome DC, which would transform my sheared data into a proper rectangle, but the function still fails.
So I guess my question is: how do I bitblit a raster image without deleting my data (a function where I give a transparent color in the destination, not the source, would be perfect), OR is there an easy way to extract the color data from a device context that has a rotate and shear transform on it?

SkiaSharp Text Size on Xamarin Forms

How does the TextSize property on an SKPaint object relate to the 'standard' Xamarin Forms FontSize?
In the image you can see the difference between size 40 on a label and as painted. What would I need to do to make them the same size?
As @hankide mentioned, it has to do with the fact that the native OS has scaling for UI elements so the app "looks the same size" on different devices.
This is great for buttons and all that as the OS is drawing them. So if the button is bigger, the OS just scales up the text. However, with SkiaSharp, we have no idea what you are drawing so we can't do any scaling. If we were to scale, the image would become blurry or pixelated on the high resolution screens.
One way to get everything the same size is to do a global scale before drawing anything:
var scale = canvasWidth / viewWidth;
canvas.Scale(scale);
And this is often good enough, but sometimes you really want to draw items differently on a high resolution screen. An example would be a tiled background. Instead of stretching the image on a bigger canvas, you may want to just tile it - preserving the pixels.
In the case of this question, you can either scale the entire canvas before drawing, or you can just scale the text:
var paint = new SKPaint {
    TextSize = 40 * scale
};
This way, the text size is increased, but the rest of the drawing is on a larger canvas.
I have an example on GitHub: https://github.com/mattleibow/SkiaSharpXamarinFormsDemo
This compares Xamarin.Forms, SkiaSharp and Native labels. (They should all be exactly the same size)
I think that the problem is in the way Xamarin.Forms handles font sizes. For example on Android, you could define the font size in pixels (px), scale-independent pixels (sp), inches (in), millimeters and density-independent pixels (dp/dip).
I can't remember how Xamarin.Forms handles the sizes (px,sp or dp) but the difference you see here is because of that. What you could do, is create an Effect that changes the font size handling on the native control and try to match the sizing provided by SkiaSharp.

Core Text on Retina Macs

I use Core Text to draw text to an offscreen bitmap context using CTLineDraw(). The bitmap is then processed internally before it is drawn to my window.
The problem here is that bitmap contexts aren't scaled on Retina Macs. Thus, on a Retina Mac, the text is still drawn at 72dpi to the bitmap but it should be drawn in 144dpi of course, because the pixel density is twice as high. Thus, the text currently looks blurry because it is drawn at 72dpi to the offscreen bitmap and this bitmap is then scaled when it is drawn to the window.
What is the best way to make Core Text Retina-aware in this context? Should I simply pass a transformation matrix to CTFontCreateWithName() that contains the screen's backingScaleFactor in its scale coefficients? That does look a little hackish, though. That's why I'm asking for some feedback or a better idea...

Transparency to text in GDI

I have created a bitmap using GDI+. I am drawing text onto that bitmap using GDI DrawText. Using DrawText I am unable to apply transparency.
Any help or code will be appreciated.
If you want to draw text without a background fill, SetBkMode(hdc,TRANSPARENT) will tell GDI to leave the background when drawing text.
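A tiny sketch of that call in context (hypothetical hdc, color, and coordinates):
SetBkMode(hdc, TRANSPARENT);             // don't fill the character cells
SetTextColor(hdc, RGB(255, 0, 0));       // the foreground color is still fully opaque
TextOutW(hdc, 10, 10, L"Hello", 5);      // whatever is under the glyph background shows through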
To actually render the foreground color of the text with alpha is going to be more complicated. GDI does not actually support alpha channels all that widely in its APIs; outside of AlphaBlend, all it really does is preserve the channel. It's actually not valid to set the upper bits of a COLORREF to alpha values, as the high byte is used for flags indicating whether the COLORREF is a palette entry (rather than an RGB value).
So, unfortunately, your only real way forward is to:
Create a 32-bit DIB section (CreateDIBSection). This gives you an HBITMAP that is guaranteed to be able to hold alpha information. If you create a bitmap via one of the other bitmap-creation functions, it's going to be at the device color depth, which might not be 32bpp.
DrawText onto the DIBSection.
When you created the DIB section you got a pointer to the actual memory. At this point you need to go through the memory and set the alpha values. I don't think that DrawText is going to do anything to the alpha channel by itself at all. I'm thinking a simple check of the RGB components of each DWORD of the bitmap: if they're the foreground color, rewrite the DWORD with a 50% (or whatever) alpha in the alpha byte; if they're the background color, rewrite with 100% alpha in the alpha byte. *
AlphaBlend the bitmap onto the final destination. AlphaBlend requires the alpha channel in the source to be pre-multiplied.
* It might be sufficient to simply memset the DIB section with a 50% alpha before doing the DrawText, and ensure that the BkColor is black. I don't know what DrawText might do to the alpha channel though; some experimentation is called for.
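A hedged sketch of the steps above (my own simplified variant with hypothetical names such as hdcDest and a fixed size; here the drawn pixels are made fully opaque rather than 50% translucent, and error handling is omitted):
// 32bpp top-down DIB section that can hold per-pixel alpha
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = 200;
bmi.bmiHeader.biHeight      = -50;                        // negative = top-down
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;
void* pBits = NULL;
HDC hdcMem = CreateCompatibleDC(hdcDest);
HBITMAP hbm = CreateDIBSection(hdcMem, &bmi, DIB_RGB_COLORS, &pBits, NULL, 0);
HGDIOBJ hOld = SelectObject(hdcMem, hbm);

// DrawText onto the DIB section (white text on the zero-initialized black background)
SetBkMode(hdcMem, TRANSPARENT);
SetTextColor(hdcMem, RGB(255, 255, 255));
RECT rc = { 0, 0, 200, 50 };
DrawTextW(hdcMem, L"Hello", -1, &rc, DT_LEFT | DT_TOP);

// DrawText leaves the alpha bytes at 0, so patch them by hand: any pixel that
// was drawn becomes opaque, everything else stays fully transparent.
DWORD* px = (DWORD*)pBits;
for (int i = 0; i < 200 * 50; ++i)
    if (px[i] & 0x00FFFFFF)
        px[i] |= 0xFF000000;

// AlphaBlend expects premultiplied alpha; fully opaque pixels already satisfy that.
BLENDFUNCTION bf = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
AlphaBlend(hdcDest, 10, 10, 200, 50, hdcMem, 0, 0, 200, 50, bf);   // needs msimg32.lib

SelectObject(hdcMem, hOld);
DeleteObject(hbm);
DeleteDC(hdcMem);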
SIMPLE and EASY solution:)
Had this problem; I tried to change alpha values and premultiply, but there was another problem: antialiased and ClearType fonts were not shown correctly (ugly edges). So what I did...
Compose your scene (bitmaps, graphics, etc.)
BitBlt the required rectangle from this scene (the same as your text rectangle, from the place where you want your text to be) into a memory DC with a compatible bitmap selected, at destination coordinates 0,0.
Draw your text into that rectangle in the memory DC.
Now AlphaBlend that rectangle, without AC_SRC_ALPHA in the BLENDFUNCTION and with the desired SourceConstantAlpha, from this memory DC back to your scene DC.
I think You got it :)
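A hedged sketch of those steps (my own illustration with hypothetical names such as hdcScene and rcText; error handling omitted). Because the text is drawn over a copy of the real background, antialiasing and ClearType blend correctly, and the constant-alpha blend then makes the text appear semi-transparent:
int w = rcText.right - rcText.left;
int h = rcText.bottom - rcText.top;

HDC hdcMem = CreateCompatibleDC(hdcScene);
HBITMAP hbm = CreateCompatibleBitmap(hdcScene, w, h);
HGDIOBJ hOld = SelectObject(hdcMem, hbm);

// copy the scene under the text rectangle into the memory DC at 0,0
BitBlt(hdcMem, 0, 0, w, h, hdcScene, rcText.left, rcText.top, SRCCOPY);

// draw the text into that copy with plain GDI
SetBkMode(hdcMem, TRANSPARENT);
SetTextColor(hdcMem, RGB(0, 0, 0));
RECT rc = { 0, 0, w, h };
DrawTextW(hdcMem, L"Sample text", -1, &rc, DT_LEFT | DT_TOP);

// blend the copy back with a constant alpha (no per-pixel alpha)
BLENDFUNCTION bf = { AC_SRC_OVER, 0, 128, 0 };            // ~50% opacity, no AC_SRC_ALPHA
AlphaBlend(hdcScene, rcText.left, rcText.top, w, h, hdcMem, 0, 0, w, h, bf);   // needs msimg32.lib

SelectObject(hdcMem, hOld);
DeleteObject(hbm);
DeleteDC(hdcMem);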
Hmmm - trying to do the same here - wondering - I see that when you create a DIB section you specify the "masks", that is, an R, G, B (and alpha) mask.
IF, and that's a big if, it really does not alter the alpha channel, then you might specify the masks differently for two bitmap headers. One specifies the RGB bits in the proper places; the other makes them all have their bits assigned to the alpha channel (set the text color to white in this case). Then render in two passes: one to load the color values, the other to load the alpha values.
???? anyway, just musing :)
While this question is about making text semi-transparent, I had the opposite problem.
DrawText was making the text in my layered window (UpdateLayeredWindow) semi-transparent ... and I didn't want it to be.
Take a look at this other question ... since in the other question I post some code that you could easily modify ... and is almost exactly what Chris Becke suggests in his answer.
A limited answer for a specific situation:
If you have a graphic with an alpha channel and you want to draw opaque text over a locally opaque background, first prepare your 32-bit bitmap with 32-bit brushes created with CreateDIBPatternBrushPt. Then scan through the bitmap bits inverting the alpha channel, draw your text as you normally would (including SetBkMode to TRANSPARENT), then invert the alpha in the bitmap again. You can skip the first inversion if you invert the alpha of your brushes.
