I'm using GDI to draw text onto a device context, and I noticed that the kerning or character placement is different when the angle is exactly 0, 90, 180, or 270 degrees. As soon as I increase the angle by 1 degree, the character placement differs noticeably.
Rather than creating an HFONT with the angle, I am using ModifyWorldTransform to transform the device context's world coordinates, and then I use TextOut to draw the text onto the device context.
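In code, the setup looks roughly like this (a minimal sketch of what I described; hdc, the coordinates, and the angle are placeholders):

    #include <windows.h>
    #include <math.h>

    void DrawRotatedText(HDC hdc, int x, int y, double angleDeg, LPCTSTR text)
    {
        XFORM xform;
        double rad = angleDeg * 3.14159265358979 / 180.0;

        SetGraphicsMode(hdc, GM_ADVANCED);      /* world transforms need this */

        xform.eM11 = (FLOAT)cos(rad);
        xform.eM12 = (FLOAT)sin(rad);
        xform.eM21 = (FLOAT)-sin(rad);
        xform.eM22 = (FLOAT)cos(rad);
        xform.eDx  = (FLOAT)x;
        xform.eDy  = (FLOAT)y;
        ModifyWorldTransform(hdc, &xform, MWT_LEFTMULTIPLY);

        TextOut(hdc, 0, 0, text, lstrlen(text));

        ModifyWorldTransform(hdc, NULL, MWT_IDENTITY);   /* reset transform */
    }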
I think that GDI is using font hints or some other special technique when the text is being drawn at exact multiples of 90 degrees, but not for any other angle.
Is there a way to disable this hinting, so that text rendered at 0 degrees does not differ significantly from text rendered at 1 degree?
Here's an example of what I mean (Monotype Corsiva font):
0 degrees:
1 degree:
For some fonts, such as Arial or Tahoma, the difference is not as noticeable, but I would like to get rid of it entirely, even if it means the text is not rendered as well as it could be.
I think this is due to anti-aliasing rather than font hints. You could try the following:
Disable (font) AA, but this will not yield acceptable results.
Create font handles for every possible angle and see if the problem persists. I assume it doesn't, but it's not a pretty solution.
Render the text to an off-screen bitmap (e.g. created with CreateCompatibleBitmap()), then draw the rotated bitmap. Whether this is practical depends on how often you need different rotations / different text.
Play with fdwOutputPrecision and fdwQuality in CreateFont(). This could be the easiest solution, but you'd have to experiment a little bit, I guess.
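For that last suggestion, a hedged starting point might look like this (the face name and the exact precision/quality values are just things to experiment with, not a known fix):

    HFONT hFont = CreateFont(
        -24, 0, 0, 0,               /* height, width, escapement, orientation */
        FW_NORMAL, FALSE, FALSE, FALSE,
        DEFAULT_CHARSET,
        OUT_TT_ONLY_PRECIS,         /* fdwOutputPrecision: force TrueType */
        CLIP_DEFAULT_PRECIS,
        NONANTIALIASED_QUALITY,     /* fdwQuality: also try ANTIALIASED_QUALITY */
        DEFAULT_PITCH | FF_DONTCARE,
        TEXT("Monotype Corsiva"));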
hth
Related
When reading source code that draws lines in Windows using GDI, it is relatively common to see FillRect() being used even though the only purpose is to draw a line. But the end products of drawing a line with a width value and of drawing a filled rectangle are quite similar, aren't they?
FillRect() is one function call; using MoveToEx() and LineTo() requires two.
Which is more efficient when needing to draw a line, using FillRect() or MoveToEx() and LineTo()?
In the most common cases, FillRect will do the same thing as MoveToEx and LineTo for perfectly horizontal and vertical rectangles. Nowadays, there are so many layers of indirection between GDI and the screen that the performance difference is almost certainly not relevant.
Drawing operations in GDI typically depend on the current "state" of the device context (DC). Lines are drawn with whichever pen is currently selected into the DC. The pen determines the color, style (solid, dashed, etc.), thickness, end caps, etc.
FillRect, however, doesn't depend on much of the DC state. All drawing primitives depend on the mapping mode and clipping region, but, unlike lines, FillRect doesn't even depend on the selected brush, since you get to provide one right in the call.
Changing state (which objects are selected into the DC) can be a lot of work. If you know you want a horizontal line, 2 pixels thick, in blue, it's a tad easier to use FillRect than to first create a pen, select it into the DC, draw your line, select the pen back out, and then decide how to manage the lifetime of that pen (when do you delete it?). If the rest of the drawing is a bunch of dashed yellow lines with round endcaps, not having to keep switching state can make the code simpler.
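To illustrate with the blue 2-pixel horizontal line from that example (coordinates are arbitrary):

    /* One call, no DC state touched: */
    RECT r = { 10, 50, 200, 52 };           /* 2 pixels tall => a horizontal line */
    HBRUSH hBlue = CreateSolidBrush(RGB(0, 0, 255));
    FillRect(hdc, &r, hBlue);
    DeleteObject(hBlue);

    /* The pen-based equivalent needs selection bookkeeping: */
    HPEN hPen = CreatePen(PS_SOLID, 2, RGB(0, 0, 255));
    HPEN hOld = (HPEN)SelectObject(hdc, hPen);
    MoveToEx(hdc, 10, 51, NULL);
    LineTo(hdc, 200, 51);
    SelectObject(hdc, hOld);                /* restore the previous pen */
    DeleteObject(hPen);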
When you set a Shape's BorderWidth > 1 in VB6, the Style is forced to be a plain line.
I'd like a DASHED (or Dotted) border with a thicker (say borderwidth=3) size.
Any way to do that without drawing it manually?
Unfortunately not: under the hood a GDI pen is used to draw the shape, and the one-device-unit width limit for styled pens is imposed there.
PS_DASH
The pen is dashed. This style is valid only when the pen width is one or less in device units.
This of course also means you cannot use the GDI API to do it directly for you either.
Perhaps draw a series of lines offset by 1 twip.
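In raw GDI terms that workaround might look like this sketch (hdc and the coordinates are placeholders); the dashes of adjacent rows stay aligned because each row starts at the same x:

    HPEN hDash = CreatePen(PS_DASH, 1, RGB(0, 0, 0));
    HPEN hOld = (HPEN)SelectObject(hdc, hDash);
    int i;
    for (i = -1; i <= 1; ++i) {             /* three 1-pixel rows => ~3 pixels thick */
        MoveToEx(hdc, 10, 100 + i, NULL);
        LineTo(hdc, 300, 100 + i);
    }
    SelectObject(hdc, hOld);
    DeleteObject(hDash);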
I'm developing a custom control. One of the requirements is to draw lines. Although this works, I noticed that my 1 pixel wide lines do not really look like 1 pixel wide lines - I know, they're not really pixels, but you know what I mean. They look more like two or three pixels wide. This becomes very apparent when I draw a dashed line with a 1 pixel dash and a 2 pixel gap. The 1 pixel dashes actually look like tiny lines instead of dots.
I've read the Cocoa Drawing documentation and although Apple mentions the setLineWidth method, changing the line width to values smaller than 1.0 will only make the line look more vague and not thinner.
So, I suspect there's something else influencing the way my lines look.
Any ideas?
Bezier paths are drawn centered on their path, so if you draw a 1 pixel wide path along the X-coordinate, the line actually draws along the Y-coordinates { -0.5, 0.5 }. The solution is usually to offset the coordinate by 0.5 so that the line is not drawn on the sub-pixel boundaries. You should be able to shift your bounding box by 0.5 to get sharper drawing behavior.
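A minimal sketch of that offset, using the CoreGraphics C API rather than NSBezierPath (ctx is assumed to be the current CGContextRef):

    #include <CoreGraphics/CoreGraphics.h>

    void StrokeSharpHairline(CGContextRef ctx)
    {
        CGContextSetLineWidth(ctx, 1.0);
        CGContextMoveToPoint(ctx, 10.5, 20.5);      /* note the 0.5 offsets */
        CGContextAddLineToPoint(ctx, 200.5, 20.5);
        CGContextStrokePath(ctx);
    }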
Francis McGrew already gave the right answer, but since I did a presentation on this once, I thought I'd add some pictures.
The problem here is that coordinates in Quartz lie at the intersections between pixels. This is fine when filling a rectangle, because every pixel that lies inside the coordinates gets filled. But lines are technically (mathematically!) invisible. To draw them, Quartz has to actually draw a rectangle with the given line width. This rectangle is centered over the coordinates:
So when you ask Quartz to stroke a rectangle with integral coordinates, it has the problem that it can only draw whole pixels. But here you see that we have half pixels. So what it does is it averages the color. For a 50% black (the line color) and 50% white (the background) line, it simply draws each pixel in grey:
This is where your washed-out drawings come from. The fix is now obvious: Don't draw between pixels, and you achieve that by moving your points by half a pixel, so your coordinate is centered over the desired pixel:
Now of course just offsetting may not be what you wanted. Because if you compare the filled variant to the stroked one, the stroke is one pixel larger towards the lower right. If you're e.g. clipping to the rectangle, this will cut off the lower right:
Since people usually expect the rectangle to stroke inside the specified rectangle, what you usually do is that you offset by 0.5 towards the center, so the lower right effectively moves up one pixel. Alternately, many drawing apps offset by 0.5 away from the center, to avoid overlap between the border and the fill (which can look odd when you're drawing with transparency).
Note that this only holds true for 1x screens. 2x Retina screens actually exhibit this problem differently, because each of the pixels below is actually drawn by 4 Retina pixels, which means they can actually draw the half-pixels. However, you still have the same problem if you want a sharp 0.5pt line. Also, since Apple may in the future introduce other Retina screens where e.g. every pixel is made up of 9 Retina pixels (3x), or whatever, you should really not rely on this. Instead, there are now API calls to convert rectangles to "backing aligned", which does this for you, no matter whether you're running 1x, 2x, or a fictitious 3x.
PS - Since I went to the trouble of writing this all up, I've put it up on my web site: http://orangejuiceliberationfront.com/are-your-rectangles-blurry-pale-and-have-rounded-corners/ where I'll update and revise this description and add more images.
The answer is (buried) in the Apple Docs:
"To avoid antialiasing when you draw a one-point-wide horizontal or vertical line, if the line is an odd number of pixels in width, you must offset the position by 0.5 points to either side of a whole-numbered position"
Hidden in the Drawing and Printing Guide for iOS: iOS Drawing Concepts, though nothing that specific is to be found in the current, standard (OS X) Cocoa Drawing Guide.
As for the effects of invoking setDefaultLineWidth: the docs also state that:
"A width of 0 is interpreted as the thinnest line that can be rendered on a particular device. The actual rendered line width may vary from the specified width by as much as 2 device pixels, depending on the position of the line with respect to the pixel grid and the current anti-aliasing settings. The width of the line may also be affected by scaling factors specified in the current transformation matrix of the active graphics context."
I found some info suggesting that this is caused by anti-aliasing. Turning anti-aliasing off temporarily is easy:
[[NSGraphicsContext currentContext] setShouldAntialias: NO];
This gives a crisp, 1-pixel line. After drawing, just switch it on again.
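The equivalent via the CoreGraphics C API, with save/restore so the setting doesn't leak into later drawing (ctx is an assumption):

    CGContextSaveGState(ctx);
    CGContextSetShouldAntialias(ctx, false);
    /* ... stroke the hairline here ... */
    CGContextRestoreGState(ctx);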
I tried the solution suggested by Francis McGrew, offsetting the x coordinate by 0.5; however, that did not make any difference to the appearance of my line.
EDIT:
To be more specific, I changed x and y coordinates individually and together with an offset of 0.5.
EDIT 2:
I must have done something wrong, as changing the coordinates with an offset of 0.5 actually does work. The end result is better than the one obtained by switching off the anti-aliasing, so I'll make Francis McGrew's answer the accepted answer.
I am searching for an article or tutorial that explains how one can draw primitive shapes (mainly simple lines) with a (neon) glow effect in the graphical output of a computer program. I do not want to do sophisticated stuff like, for example, in modern first-person shooters or the like. I am more in search of a simple solution, like the lines in this picture: http://tjl.co/blog/wp-content/uploads/2009/05/NeonStripes.jpg -- but of course drawn by a computer program in my case.
The whole thing should run on a modern smart phone, so the hardware is a bit limited.
I do know a bit about OpenGL, but not too much, so unfortunately I am a bit lost here. I did some research on Google ("glow effect algorithm" and similar), but found either highly complex stuff for 3D games, or tutorials for Photoshop & co.
So what I would really need is an in-depth article on that subject, but not on a very advanced level. I hope that's even possible... I have just started with OpenGL and did some minor graphics programming in the past, but I am a long-time programmer now, so I would understand technical papers in general.
Does anyone of you know of such an article/paper/tutorial/anything?
Thanks in advance for any good advice!
Cheers!
Matthias
It's just a bunch of lines with different brightness/transparency. Basically, if you want a glow effect of size 20 pixels for a 1 px line, you draw 41 lines, each 1 px wide. The middle line gets your base colour; the other lines get colours that grade from the base colour to 100% transparency (as in your example) or to the darkest colour variant (if you have a black background and no transparency).
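A sketch of that loop, written against GDI only because this thread is GDI-heavy; since GDI pens have no alpha channel, each line's colour is interpolated from the base colour toward the (dark) background instead, and the same loop translates to any line-drawing API. hdc, the colours, and the coordinates are placeholders:

    COLORREF base = RGB(0, 255, 255), bg = RGB(0, 0, 0);
    int halo = 20, i;                       /* glow radius in pixels */
    for (i = -halo; i <= halo; ++i) {       /* 2*20+1 = 41 one-pixel lines */
        double t = 1.0 - (double)abs(i) / (halo + 1);   /* 1 at the centre */
        COLORREF c = RGB(
            (int)(GetRValue(base) * t + GetRValue(bg) * (1.0 - t)),
            (int)(GetGValue(base) * t + GetGValue(bg) * (1.0 - t)),
            (int)(GetBValue(base) * t + GetBValue(bg) * (1.0 - t)));
        HPEN p = CreatePen(PS_SOLID, 1, c);
        HPEN old = (HPEN)SelectObject(hdc, p);
        MoveToEx(hdc, 10, 100 + i, NULL);
        LineTo(hdc, 300, 100 + i);
        SelectObject(hdc, old);
        DeleteObject(p);
    }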
That is it. :)
This isn't something I've ever done, but looking at your example, the basic approach I'd use to try and recreate it would be...
Start with an algorithm for drawing a filled shape large enough to include the original shape and the glow. For example, a rectangle becomes a slightly larger rectangle, but with rounded corners. An infinitesimally wide line becomes a thickened line with semi-circular caps. Subtract out the original shape (and fill its pixels normally).
For each pixel in the glow, the colour depends on the shortest distance to any part of the original shape. This normally reduces to the distance to the nearest point on a line (e.g. one edge of a rectangle).
The distance is translated to a colour value, probably using Hue-Saturation-Value or a similar colour scheme, as well as by reducing alpha (increasing transparency). For neon glows, you probably want constant hue, decreasing brightness, maybe increasing saturation, and decreasing alpha.
Translate the HSV/whatever colour value to RGB for output. See this question.
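For completeness, the standard HSV-to-RGB conversion looks like this (h in [0, 360), s and v in [0, 1]); nothing here is specific to the glow itself:

    #include <math.h>

    static void hsv_to_rgb(double h, double s, double v,
                           double *r, double *g, double *b)
    {
        double c = v * s;                                   /* chroma */
        double hp = h / 60.0;
        double x = c * (1.0 - fabs(fmod(hp, 2.0) - 1.0));
        double m = v - c;
        double rp = 0, gp = 0, bp = 0;
        if      (hp < 1) { rp = c; gp = x; }
        else if (hp < 2) { rp = x; gp = c; }
        else if (hp < 3) { gp = c; bp = x; }
        else if (hp < 4) { gp = x; bp = c; }
        else if (hp < 5) { rp = x; bp = c; }
        else             { rp = c; bp = x; }
        *r = rp + m; *g = gp + m; *b = bp + m;
    }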
EDIT - I should probably have said HSL rather than HSV - in HSL, if L is at its maximum value, the resulting colour is always white. For HSV, that's only true if saturation is also at zero. See http://en.wikipedia.org/wiki/HSL_and_HSV
The real trick is that even on a phone these days, I'd guess you probably should use hardware (shaders) for this - sorry, I don't know how that's done.
The "painters algorithm" overlaying of gradually smaller shapes that others have described here is also a possibility, but (1) possibly slower, depending on implementation issues, and (2) you may need to draw to an off-screen buffer, with some special handling for the alpha channel, then blit back to the screen to handle the transparency correctly - if you need transparency, that is.
EDIT - Silly me. An alternative approach is to apply a blur to your original shape (in greyscale), but instead of writing out the blurred pixels directly, apply the colour-transformation to each blurred pixel value.
A blur is basically a weighted moving average. Technically, a finite impulse response filter is implemented using a convolution, but the maths for that is a tad awkward, and if you just want "a blur" of about the right size, draw a grayscale circle of pixels as your "weights" image.
The blur in this case basically replaces the distance-from-shape calculation.
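A naive sketch of such a blur, with uniform weights for brevity (the suggestion above uses a circular weights image instead) and as a single horizontal pass over an 8-bit grayscale buffer - run it again over the columns for a full 2D blur. width, height, and radius are assumptions:

    void box_blur_h(const unsigned char *src, unsigned char *dst,
                    int width, int height, int radius)
    {
        int x, y, k;
        for (y = 0; y < height; ++y) {
            for (x = 0; x < width; ++x) {
                int sum = 0, count = 0;
                for (k = -radius; k <= radius; ++k) {
                    int xx = x + k;
                    if (xx >= 0 && xx < width) {
                        sum += src[y * width + xx];
                        ++count;
                    }
                }
                dst[y * width + x] = (unsigned char)(sum / count);
            }
        }
    }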
         _____________________
        |                     |
    ----|---------------------|-----> line
        |_____________________|
             gradient block
Break up your line into small non-overlapping blocks. Use whatever graphics primitive you have to draw a tilted rectangular gradient: the center is at 100% and the outer edge is at 0%.
Don't draw it on the image yet; you want to blend it with the image. Using regular transparency will just make it look like a random pipe or pole or something (unless you draw a white line, and your background is dark).
Here are two choices of blending mode:
color dodge: [blended pixel value] = [bottom pixel value] / (1 - [overlay's pixel value]), clamped to 1
linear dodge: [blended pixel value] = min([overlay's pixel value] + [bottom pixel value], 1)
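As per-channel helpers on values normalised to [0, 1], those two modes could be sketched as:

    static double color_dodge(double bottom, double overlay)
    {
        double d = 1.0 - overlay;
        double v = (d <= 0.0) ? 1.0 : bottom / d;
        return (v > 1.0) ? 1.0 : v;
    }

    static double linear_dodge(double bottom, double overlay)
    {
        double s = bottom + overlay;
        return (s > 1.0) ? 1.0 : s;
    }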
Then draw the line above the glow.
If you want to draw a curved "neon" line, simply draw it as a sequence of superimposed "neon dots" where each "neon dot" is a small circular image with transparency going from 0% at the origin to 100% at the edge of the circle.
I have a Win32 GUI application that uses GDI heavily. It needs to draw text over a bitmap at specified coordinates, and later erase it by restoring the original bitmap.
I proceed as follows:
select the font (GetStockObject(DEFAULT_GUI_FONT)), brush, and other stuff into the device context
call GetTextExtentPoint32() to compute the size of the text
now, having the text's starting point, I can compute the expected text rectangle and store it
call TextOut() for the same device context with the same starting point and the same text
and later restore the bitmap for the stored rectangle.
It works fine when ClearType antialiasing is off. But with ClearType on the size returned by GetTextExtentPoint32() is slightly smaller than the size actually occupied by the text when TextOut() is called. So when I later restore the original bitmap some small stripes of the text remain in place and I have artifacts.
Is there any cure to this without disabling ClearType?
You could also try DrawText with DT_CALCRECT to compute the string size. Maybe it works better.
You can then also write the string with DrawText inside a rectangle of exactly the size you got with DT_CALCRECT, and it will clip the text in case it is a bit larger.
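A minimal sketch of that approach (hdc and text are assumed; the DT_SINGLELINE and DT_NOPREFIX flags are choices, not requirements):

    RECT rc = { 100, 100, 100, 100 };   /* left/top = desired start point */
    DrawText(hdc, text, -1, &rc, DT_CALCRECT | DT_SINGLELINE | DT_NOPREFIX);
    /* rc now holds the computed extent; store it, then draw clipped to
       exactly that rectangle: */
    DrawText(hdc, text, -1, &rc, DT_SINGLELINE | DT_NOPREFIX);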