What is going on here?
pango_font_description_set_family(font.get(),"Serif");
pango_font_description_set_absolute_size(font.get(), 16 * PANGO_SCALE);
int w;
int h;
pango_layout_set_font_description (layout.get(),font.get());
pango_layout_set_width(layout.get(),960*PANGO_SCALE/4);
pango_layout_set_text(layout.get(),"Lorem ipsum dolor sit amet, consctetur adipiscing elit. Nulla vel enim est. Phasellus quis lacinia urna.", -1);
//I want right alignment for this layout
pango_layout_set_alignment(layout.get(),PANGO_ALIGN_RIGHT);
pango_layout_get_pixel_size(layout.get(),&w,&h);
cairo_set_source_rgb (cr.get(), 0.0, 0.0, 0.0);
//Move draw point to the right edge, taking the size of the layout into account
cairo_move_to (cr.get(), 960 - 16 - w,16);
pango_cairo_show_layout(cr.get(), layout.get());
//Draw test rectangle around the text.
cairo_rectangle(cr.get(),960-16-w,16,w,h);
cairo_stroke(cr.get());
cairo_surface_write_to_png(surf.get(),"test.png");
As seen in the picture, the text is positioned to the right of the desired position. How can I position the text correctly? cairo_move_to is clearly not the right choice here, or are there any known bugs affecting the behaviour of PANGO_ALIGN_CENTER and PANGO_ALIGN_RIGHT? PANGO_ALIGN_LEFT works as expected.
The problem is that Pango may not render the text at its nominal origin. To fix this, use pango_layout_get_pixel_extents and query the "ink" rectangle. Notice that it may have non-zero x and y components. Subtracting these offsets from the desired location gives the correct placement.
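As a minimal sketch (reusing the layout and cr handles and the 960/16 canvas values from the question's code):

// Query the ink extents: the rectangle actually touched by the glyphs.
PangoRectangle ink;
pango_layout_get_pixel_extents(layout.get(), &ink, NULL);

// Compensate for the non-zero ink offset when positioning the layout:
// right edge of the ink at 960 - 16, top of the ink at 16.
cairo_move_to(cr.get(), 960 - 16 - ink.width - ink.x, 16 - ink.y);
pango_cairo_show_layout(cr.get(), layout.get());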
I have an image with very low intensity contrast against its background.
The first line between the two arrows is the line with low contrast.
The second line is OK. Please see the image below.
The original image is as shown below.
I used the following method to enhance the contrast in grayscale. First the image is converted to grayscale, then this is applied:
cv::Mat temp;
for (int i = 0; i < 1; i++) // number of iterations has to be adjusted
{
    cv::threshold(image, temp, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
    cv::bitwise_and(image, temp, image);
    cv::normalize(image, image, 0, 255, cv::NORM_MINMAX, -1, temp);
}
This gives an image with slightly higher contrast in grayscale, but is there any better method, in grayscale or in color?
I would look at histogram equalization; it might serve your needs. Basic (global) equalization or even adaptive equalization can yield great results. Parameters will likely need to be tuned for the adaptive method (I am using the ones from the docs example for now).
I get (global equalization - left; adaptive equalization - right):
Once the equalization is done, you might have better luck with thresholding (though your example is very low contrast):
From there, you can use standard contour/shape matching etc. to try to find the location of your first black line (see the sketch after the code below).
Adapted from the docs example:
import cv2
import matplotlib.pyplot as plt
import numpy as np

raw_img_load = cv2.imread('H1o8X.png')
imgr = cv2.cvtColor(raw_img_load, cv2.COLOR_BGR2GRAY)

clahe = cv2.createCLAHE(clipLimit=30.0, tileGridSize=(8, 8))
imgray_ad = clahe.apply(imgr)    # adaptive equalization
imgray = cv2.equalizeHist(imgr)  # global equalization

res = np.hstack((imgray, imgray_ad))  # stack so we can plot them together
plt.imshow(res, cmap='gray')
plt.show()

ret, thresh = cv2.threshold(imgray_ad, 150, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
plt.imshow(thresh, cmap='gray')
plt.show()
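As a sketch of the contour step mentioned above (in C++, to match the snippet further below; the area cutoff is a made-up starting point, not a tuned value):

// Find contours in the thresholded image and keep large blobs as
// candidates for the first line.
std::vector<std::vector<cv::Point>> contours;
cv::findContours(thresh, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
for (const auto& c : contours) {
    if (cv::contourArea(c) > 100.0) {            // area cutoff is an assumption
        cv::Rect box = cv::boundingRect(c);      // location of a candidate line
        // ... match box against the expected line geometry ...
    }
}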
EDIT: based on @Doleron's answer, for this particular problem I would recommend using fastNlMeansDenoising (applied before any histogram equalization). Note, however, that it can be a slow function for high-res images or time-sensitive image processing.
The @Antoine Zambelli answer is awesome and it is the correct one. Anyway, I dug around a bit and tried removing the noise first with fastNlMeansDenoising to improve the final result:
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include "opencv2/photo.hpp"

using namespace cv;
using cv::CLAHE;

int main(int argc, char** argv) {
    Mat srcImage = imread("H1o8X.png", CV_LOAD_IMAGE_GRAYSCALE);
    imshow("src", srcImage);

    Mat denoised;
    fastNlMeansDenoising(srcImage, denoised, 10);
    Mat image = denoised;

    Ptr<CLAHE> clahe = createCLAHE();
    clahe->setClipLimit(30.0);
    clahe->setTilesGridSize(Size(8, 8));

    Mat imgray_ad;
    clahe->apply(image, imgray_ad);
    Mat imgray;
    cv::equalizeHist(image, imgray);
    imshow("imgray_ad", imgray_ad);
    imshow("imgray", imgray);

    Mat thresh;
    threshold(imgray_ad, thresh, 150, 255, THRESH_BINARY | THRESH_OTSU);
    imshow("thresh", thresh);

    Mat result;
    Mat kernel = Mat::ones(8, 8, CV_8UC1);
    erode(thresh, result, kernel);
    imshow("result", result);

    waitKey();
    return 0;
}
I'm making a small project where I have to detect points scored from a given image of a paper target. Something similar to the TargetScan app for iPhone.
I'm using OpenCV for image processing, and basically I have two parts: one is to detect circles on the target (which works pretty well with the Hough Circle Transform), and the second part is to detect shots. I need some ideas on how to detect those shots from a given image. Here is an example image with circle detection ON (green lines for detected circles and a red point for the center). What algorithms from OpenCV can be used to detect those shots?
Here is another example image
Algo:
1. create/clear mask for image
2. binarize image (to black and white by some intensity threshold)
3. process all pixels
   Count how many pixels of the same color are there in the x and y directions; call these wx, wy.
4. detect circle, shot and mid section
   Circles are thin, so wx or wy should be less than the thin threshold and the other one should be bigger. Shots are big, so both wx and wy must be in the shot diameter range. The mid section is black with both wx, wy above all thresholds (you can compute the avg point here). Store this info in the mask.
5. recolor image with mask info
6. compute center and radii of the circles from found points
   The center is the avg point of the mid section area; now process all the green points and compute a radius for each. Build a histogram of all found radii and sort it by count, descending. The count should be consistent with 2*PI*r; if not, ignore such points.
7. group shot pixels together
   Segment or flood-fill recolor each hit to avoid counting a single shot multiple times.
I coded #1..#6 for fun in C++; here is the code:
picture pic0, pic1, pic2;
// pic0 - source
// pic1 - output
// pic2 - mask
int x, y, i, n, wx, wy;
int r0 = 3;   // thin curve width threshold [pixels]
int r1a = 15; // shot diameter min threshold [pixels]
int r1b = 30; // shot diameter max threshold [pixels]
int x0, y0;   // avg point == center

// init output as source image but in grayscale intensity only
pic1 = pic0;
pic1.rgb2i();
// init mask (size of source image)
pic2.resize(pic0.xs, pic0.ys);
pic2.clear(0);
// binarize image and convert back to RGB
for (y = r0; y < pic1.ys - r0 - 1; y++)
    for (x = r0; x < pic1.xs - r0 - 1; x++)
        if (pic1.p[y][x].dd <= 500)        // Black/White threshold <0,765>
            pic1.p[y][x].dd = 0x00000000;  // Black in RGB
        else pic1.p[y][x].dd = 0x00FFFFFF; // White in RGB
// process pixels
x0 = 0; y0 = 0; n = 0;
for (y = r1b; y < pic1.ys - r1b - 1; y++)
    for (x = r1b; x < pic1.xs - r1b - 1; x++)
    {
        wy = 1; // count the same color pixels in column
        for (i = 1; i <= r1b; i++) if (pic1.p[y - i][x].dd == pic1.p[y][x].dd) wy++; else break;
        for (i = 1; i <= r1b; i++) if (pic1.p[y + i][x].dd == pic1.p[y][x].dd) wy++; else break;
        wx = 1; // count the same color pixels in line
        for (i = 1; i <= r1b; i++) if (pic1.p[y][x - i].dd == pic1.p[y][x].dd) wx++; else break;
        for (i = 1; i <= r1b; i++) if (pic1.p[y][x + i].dd == pic1.p[y][x].dd) wx++; else break;
        if ((wx < r0) || (wy < r0))       // if thin
            if ((wx >= r0) || (wy >= r0)) // but still a line
            {
                pic2.p[y][x].dd = 1;      // thin line
            }
        if (pic1.p[y][x].dd == 0)         // black
            if ((wx >= r0) && (wy >= r0)) // and thick in both axes
            {
                pic2.p[y][x].dd = 2;      // middle section
                x0 += x; y0 += y; n++;
            }
        if (pic1.p[y][x].dd)              // white (background color)
            if ((wx > r1a) && (wy > r1a)) // size in range of shot
                if ((wx < r1b) && (wy < r1b))
                {
                    pic2.p[y][x].dd = 3;  // shot
                }
    }
if (n) { x0 /= n; y0 /= n; }
// add mask data (recolor) to output image
// if (0)
for (y = 0; y < pic1.ys; y++)
    for (x = 0; x < pic1.xs; x++)
    {
        if (pic2.p[y][x].dd == 1) pic1.p[y][x].dd = 0x0000FF00; // green thin line
        if (pic2.p[y][x].dd == 2) pic1.p[y][x].dd = 0x000000FF; // blue middle section
        if (pic2.p[y][x].dd == 3) pic1.p[y][x].dd = 0x00FF0000; // red shots
    }
// center cross
i = 25;
pic1.bmp->Canvas->Pen->Color = 0x0000FF;
pic1.bmp->Canvas->MoveTo(x0 - i, y0);
pic1.bmp->Canvas->LineTo(x0 + i, y0);
pic1.bmp->Canvas->MoveTo(x0, y0 - i);
pic1.bmp->Canvas->LineTo(x0, y0 + i);
I use my own picture class for images, so some of its members are:
xs, ys - size of image in pixels
p[y][x].dd - pixel at (x,y) position as a 32-bit integer type
clear(color) - clears entire image
resize(xs,ys) - resizes image to new resolution
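For reference, a minimal sketch of what an equivalent class might look like (this is my guess, not the author's actual class; the VCL bmp->Canvas drawing calls are C++ Builder specifics and are not reproduced here):

#include <vector>
#include <cstdint>

struct pixel { uint32_t dd; };           // 32-bit pixel, 0x00RRGGBB

struct picture {
    int xs = 0, ys = 0;                  // image size in pixels
    std::vector<std::vector<pixel>> p;   // p[y][x] addressing, as in the code

    void resize(int nxs, int nys) {      // resize image to new resolution
        xs = nxs; ys = nys;
        p.assign(ys, std::vector<pixel>(xs));
    }
    void clear(uint32_t color) {         // fill entire image with one color
        for (auto& row : p) for (auto& px : row) px.dd = color;
    }
};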
This is the recolored result:
green - thin circles
blue - mid section
red cross - center of circles
red - shots
As you can see, it still needs the further processing from bullets #7 and #8. Also, your image has no shot outside the mid section, so shot detection outside the mid section may need some tweaking too.
[edit1] radii
// create & clear radius histogram
n = pic2.xs; if (n < pic2.ys) n = pic2.ys;
int *hist = new int[n];
for (i = 0; i < n; i++) hist[i] = 0;
// compute histogram
for (y = 0; y < pic2.ys; y++)
    for (x = 0; x < pic2.xs; x++)
        if (pic2.p[y][x].dd == 1) // thin pixels
        {
            i = sqrt(((x - x0) * (x - x0)) + ((y - y0) * (y - y0)));
            hist[i]++;
        }
// merge neighbouring radii
for (i = 0; i < n; i++)
    if (hist[i])
    {
        for (x = i; x < n; x++) if (!hist[x]) break;
        for (wx = 0, y = i; y < x; y++) { wx += hist[y]; hist[y] = 0; }
        hist[(i + x - 1) >> 1] = wx; i = x - 1;
    }
// draw the valid circles
pic1.bmp->Canvas->Pen->Color = 0xFF00FF; // magenta
pic1.bmp->Canvas->Pen->Width = r0;
pic1.bmp->Canvas->Brush->Style = bsClear;
for (i = 0; i < n; i++)
    if (hist[i])
    {
        float a = float(hist[i]) / (2.0 * M_PI * float(i));
        if ((a >= 0.3) && (a <= 2.1))
            pic1.bmp->Canvas->Ellipse(x0 - i, y0 - i, x0 + i, y0 + i);
    }
pic1.bmp->Canvas->Brush->Style = bsSolid;
pic1.bmp->Canvas->Pen->Width = 1;
delete[] hist;
The detected circles are in magenta ... pretty good, I think. The mid section skews it a bit. You can compute the average radius step and interpolate the missing circles (see the sketch below) ...
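A sketch of that interpolation idea (my own guess at one way to do it, not the author's code; it assumes the radii that survived the histogram test above, sorted ascending):

#include <vector>
#include <algorithm>

// Estimate the ring spacing from the smallest adjacent gap (a missing ring
// shows up as a gap of roughly twice the spacing), then emit radii on that
// grid from the innermost to the outermost detected ring.
std::vector<int> interpolateRadii(const std::vector<int>& r)
{
    if (r.size() < 2) return r;
    int step = r[1] - r[0];
    for (size_t i = 2; i < r.size(); i++)
        step = std::min(step, r[i] - r[i - 1]); // smallest gap ~ ring spacing
    std::vector<int> out;
    for (int radius = r.front(); radius <= r.back(); radius += step)
        out.push_back(radius);
    return out;
}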
This is for the programming language Processing (2.0).
Say I wish to load a non-square image (let's use a green circle for the example). If I load this on a black background, you can visibly see the white square of the image (i.e. all parts of the image that aren't the green circle). How would I go about efficiently removing them?
I can not think of an efficient way to do it, and I will be doing it to hundreds of pictures about 25 times a second (since they will be moving).
Any help would be greatly appreciated; the more efficient the code, the better.
As @user3342987 said, you can loop through the image's pixels to see if each pixel is white or not. However, it's worth noting that 255 is white (not 0, which is black). You also shouldn't hardcode the replacement color as they suggested: what if the image is moving over a striped background? The best approach is to change all the white pixels into transparent pixels using the image's alpha channel. Also, since you mentioned you would be doing it "about 25 times a second", you shouldn't repeat these checks every frame; the result is the same every time, so re-checking would be wasteful. Instead, do it once when the images are first loaded, something like this (untested):
PImage[] images;

void setup() {
  size(400, 400);
  images = new PImage[10];
  for (int i = 0; i < images.length; i++) {
    // example filenames
    PImage img = loadImage("img" + i + ".jpg");
    img.format = ARGB; // JPGs load without an alpha channel; give the image one
    img.loadPixels();
    for (int p = 0; p < img.pixels.length; p++) {
      // color(255, 255, 255) is white
      if (img.pixels[p] == color(255, 255, 255)) {
        img.pixels[p] = color(0, 0); // set it to transparent (first number is meaningless)
      }
    }
    img.updatePixels();
    images[i] = img;
  }
}

void draw() {
  // draw the images as normal; the white pixels are now transparent
}
So this will lead to no lag during draw(), because you edited out the white pixels in setup(). Whatever you're drawing the images on top of will show through.
It's also worth mentioning that some image file types have an alpha channel built in (e.g. the PNG format), so you could instead make the white pixels transparent in an image editor and use those edited files for your sketch. Then your sketch wouldn't have to edit them every time it starts up.
Pixels are stored in the pixels[] array. You can use a for loop to check whether the value is 0 (aka white); if it is white, load it as the black background.
I have very minimal programming experience.
I would like to write a program that will generate and save as a gif image every possible image that can be created using only black and white pixels in 640 by 360 px dimensions.
In other words, each pixel can be either black or white. 640 x 360 = 230,400 pixels. So I believe total of 460,800 images are possible to be generated (230,400 x 2 for black/white).
I would like a program to do this automatically.
Please help!
First, to answer your questions: yes, there will be writing on "some" pictures. Actually, every text ever written by a human that fits in 640x360 pixels will show up, as will every other text (text not yet written, or text that will never be written). You will also see pictures of every human who is, was, or will be alive. See the Infinite Monkey Theorem for further information.
The code to create your desired gif is fairly easy. I used Java for this. Note that you need an extra class: AnimatedGifEncoder. The code is not memory-bound because the AnimatedGifEncoder writes each image to disk as soon as it is computed. But make sure that you have enough disk space available.
import java.awt.Color;
import java.awt.image.BufferedImage;

public class BigPicture {
    private final int width;
    private final int height;

    private final int WHITE = Color.WHITE.getRGB();
    private final int BLACK = Color.BLACK.getRGB();

    public BigPicture(int width, int height) {
        this.width = width;
        this.height = height;
    }

    public void process(String outFile) {
        AnimatedGifEncoder gif = new AnimatedGifEncoder();
        gif.setSize(width, height);
        gif.setTransparent(null); // no transparency
        gif.setRepeat(-1);        // play only once
        gif.setDelay(0);          // 0 ms delay between images,
                                  // 'cause ain't nobody got time for that!
        gif.start(outFile);

        BufferedImage bufferedImage = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_BINARY);
        // set the image to all white
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                bufferedImage.setRGB(x, y, WHITE);
            }
        }

        // add white image
        gif.addFrame(bufferedImage);

        // add all other combinations
        while (increase(bufferedImage)) {
            gif.addFrame(bufferedImage);
        }

        gif.finish();
    }

    /**
     * @param bufferedImage
     *            the image to increase
     * @return false if last pixel set to black => image is completely black
     */
    private boolean increase(BufferedImage bufferedImage) {
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                if (bufferedImage.getRGB(x, y) == WHITE) {
                    bufferedImage.setRGB(x, y, BLACK);
                    return true;
                }
                bufferedImage.setRGB(x, y, WHITE);
            }
        }
        return false;
    }

    public static void main(String[] args) {
        new BigPicture(640, 360).process("C:\\temp\\bigpicture.gif");
        System.out.println("finished.");
    }
}
Please be aware that this will take some time. So don't bother waiting and enjoy your life instead! ;)
EDIT: Since my solution is a bit unclear, I will explain the algorithm.
I have defined a method called increase. This method takes the BufferedImage and changes the bit pattern of the image so that the next bit pattern appears. The method is just a bitwise addition. It returns false when the image reaches the last bit pattern (all pixels set to black).
As long as it is possible to increase the bit pattern (i.e. increase() returns true), we save the image as a new frame and increase the image again.
How the increase() method works: the method runs over the image, first in the x-direction, then in the y-direction. I assume that white pixels are 0 and black pixels are 1. So we want to take the bit pattern of the image and add 1. We inspect the first pixel: if it is white (0), we can add 1 without an overflow, so we turn the pixel black (0 + 1 = 1 => black pixel). After that we return from the method, because we want to increase only one position. It returns true because an increase was possible. If we encounter a black pixel, we have an overflow (1 + 1 = 2, or in binary 10). So we have to set the current pixel to white and carry the 1 to the next pixel. This continues until we find the first white pixel.
Example:
First we create a print method. This method prints the image as a binary number. Note that the number is reversed: the most significant bit is on the right side.
public void print(BufferedImage bufferedImage) {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            if (bufferedImage.getRGB(x, y) == WHITE) {
                System.out.print(0); // white pixel
            } else {
                System.out.print(1); // black pixel
            }
        }
    }
    System.out.println();
}
Now we modify our main while loop:
print(bufferedImage); // this one prints the empty image
while (increase(bufferedImage)) {
    print(bufferedImage);
}
And now run a short example to test:
new BigPicture(1, 5).process("C:\\temp\\bigpicture.gif");
and finally the output:
00000 // 0 this is the first print before the loop -> "white image"
10000 // 1 the first white pixel is set to black
01000 // 2 the first overflow, so the second pixel is set to black "2"
11000 // 3
00100 // 4
10100 // 5
01100
11100
00010 // 8
10010
01010
11010
00110
10110
01110
11110
00001 // 16
10001
01001
11001
00101
10101
01101
11101
00011
10011
01011
11011
00111
10111
01111
11111 // 31 == 2^5 - 1
finished.
In other words, each pixel can be either black or white. 640 x 360 = 230,400 pixels. So I believe total of 460,800 images are possible to be generated (230,400 x 2 for black/white).
There is a little flaw in your belief. You are right about the number of pixels: 230,400. Unfortunately, this means there are not 2 * 230,400, but 2 ^ 230,400 possible pictures, which is a number with more than 60,000 digits (longer than the allowed answer size, I am afraid). For comparison, a number with only 45 digits already describes the diameter of the observable universe in centimeters (roughly the width of a pinkie).
In order to understand why your computation of the number of pictures is wrong, consider this example: if your pictures contained only three pixels, you could have 8 different pictures (2 ^ 3), rather than 6 (2 * 3). Here are all of them: BBB, BBW, BWB, BWW, WBB, WBW, WWB, WWW. Adding another pixel doubles the number of possible pictures, because you can have it white for all the 3-pixel cases, or black for all the 3-pixel cases. Doubling 1 (which is the number of pictures you can have with 0 pixels) 230,400 times gives you 2 ^ 230,400.
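As a quick sanity check of the digit count claimed above (a small side calculation of my own, not part of the original answer):

#include <cmath>
#include <cstdio>

int main() {
    // decimal digits of 2^230400 = floor(230400 * log10(2)) + 1
    int digits = (int)(230400 * std::log10(2.0)) + 1;
    std::printf("%d\n", digits); // prints 69358
}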
It's great that there is a bounty for the question, but it is rather distracting and counter-productive if it was just an April Fools' joke.
I'm going to go ahead and pinch some code from a related question, just for fun.
from itertools import product

# each matrix is one tuple of 640 * 360 = 230,400 binary pixels;
# the loop runs 2**230400 times in total
for matrix in product([0, 1], repeat=640 * 360):
    # render and save your .gif
    pass
As all the comments have already stated, good luck!
On a more serious note, if you didn't need to be absolutely sure that you had all permutations, you could generate a random 640x360 matrix and store it as an image.
Perform this action, say, 100k times and you'll have at least an interesting set of pictures to look at, but it's unfeasible to get every possible permutation.
You could then delete all identical files to reduce the set to just the unique images, as sketched below.
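A minimal sketch of that random-sampling idea (my own illustration in C++; it writes plain PBM files rather than gifs, and the filenames are made up):

#include <fstream>
#include <random>
#include <string>

int main() {
    std::mt19937 rng(std::random_device{}());
    std::bernoulli_distribution bit(0.5);
    for (int n = 0; n < 100000; n++) {          // "say 100k times"
        std::ofstream out("random_" + std::to_string(n) + ".pbm");
        out << "P1\n640 360\n";                 // plain PBM: one bit per pixel
        for (int y = 0; y < 360; y++) {
            for (int x = 0; x < 640; x++)
                out << (bit(rng) ? '1' : '0') << ' ';
            out << '\n';
        }
    }
}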
I created two rulers, one vertical and one horizontal:
Now in the vertical ruler, the 'size' of the text is visually larger (approx. 5-6 pixels longer).
Why?
Relevant code:
WM_CREATE:
LOGFONT Lf = {0};
Lf.lfHeight = 12;
lstrcpyW(Lf.lfFaceName, L"Arial");
if (!g_pGRI->bHorizontal)
{
    Lf.lfEscapement = 900; // <---- for vertical ruler!
}
g_pGRI->hfRuler = CreateFontIndirectW(&Lf);
SelectFont(g_pGRI->hdRuler, g_pGRI->hfRuler);
WM_PAINT:
SetTextColor(g_pGRI->hdRuler, g_pGRI->cBorder);
SetBkColor(g_pGRI->hdRuler, g_pGRI->cBackground);
SetTextAlign(g_pGRI->hdRuler, TA_CENTER);

#define INCREMENT 10

WCHAR wText[16] = {0};
if (g_pGRI->bHorizontal)
{
    INT ixTicks = RECTWIDTH(g_pGRI->rRuler) / INCREMENT;
    for (INT ix = 0; ix < ixTicks + 1; ix++)
    {
        MoveToEx(g_pGRI->hdRuler, INCREMENT * ix, 0, NULL);
        if (ix % INCREMENT == 0)
        {
            // This is a major tick.
            LineTo(g_pGRI->hdRuler, INCREMENT * ix, g_pGRI->lMajor);
            wsprintfW(wText, L"%d", INCREMENT * ix);
            TextOutW(g_pGRI->hdRuler, INCREMENT * ix + 1, g_pGRI->lMajor + 1, wText, CHARACTERCOUNT(wText));
        }
        else
        {
            // This is a minor tick.
            LineTo(g_pGRI->hdRuler, INCREMENT * ix, g_pGRI->lMinor);
        }
    }
}
else
{
    INT iyTicks = RECTHEIGHT(g_pGRI->rRuler) / INCREMENT;
    for (INT iy = 0; iy < iyTicks + 1; iy++)
    {
        MoveToEx(g_pGRI->hdRuler, 0, INCREMENT * iy, NULL);
        if (iy % INCREMENT == 0)
        {
            // This is a major tick.
            LineTo(g_pGRI->hdRuler, g_pGRI->lMajor, INCREMENT * iy);
            wsprintfW(wText, L"%d", INCREMENT * iy);
            TextOutW(g_pGRI->hdRuler, g_pGRI->lMajor + 1, INCREMENT * iy + 1, wText, CHARACTERCOUNT(wText));
        }
        else
        {
            // This is a minor tick.
            LineTo(g_pGRI->hdRuler, g_pGRI->lMinor, INCREMENT * iy);
        }
    }
}
Background
There are several different schemes for rasterizing text in a legible way when the text is small relative to the size of a pixel. For example, if the stroke width is supposed to be 1.25 pixels wide, you either have to round it off to a whole number of pixels, use antialiasing, or use subpixel rendering (like ClearType). Rounding is usually controlled by "hints" built into the font by the font designer.
Hinting is the main reason why text width doesn't always scale exactly with the text height. For example, if, because of rounding, the left hump of a lowercase m is a pixel wider than the right one, a hint might tell the renderer to round the width up to make the letter symmetric. The result is that the character is a tad wider relative to its height than the ideal character.
This issue
What's likely happening here is that when GDI renders the string horizontally, each subsequent character may start at a fractional position, which is simulated by antialiasing or subpixel (ClearType) rendering. But, when rendering vertically, it appears that each subsequent character's starting position is rounded up to the next whole pixel, which tends to make the vertical text a couple pixels "longer" than its horizontal counterpart. Effectively, the kerning is always rounded up to the next whole pixel.
It's likely that more effort was put into the common case of horizontal text rendering, making it easier to read (and possibly faster to render). The general case of rendering at any other angle may have been implemented in a simpler manner, working glyph-by-glyph instead of with the entire string.
Things to Try
If you want them to look that same, you'll probably have to make a small compromise in the visual quality of the horizontal labels. Here are a few things I can think of to try:
Render the labels with regular antialiasing instead of ClearType subpixel rendering. (You can do this by setting the lfQuality field in the LOGFONT.) You would then draw the horizontal labels in the normal manner. For the vertical labels, draw them to an offscreen buffer horizontally, rotate it, and then blit the buffer to the screen (a sketch follows after these suggestions). This gives you labels that look identical. The reason I suggest regular antialiasing is that it's invariant to the rotation. ClearType rendering has an inherent orientation and thus cannot be rotated without creating fringing. I've used this approach for graph labels with good results.
Render the horizontal labels character by character, rounding the starting point up to the next whole pixel. This should make the horizontal labels look like the vertical ones. Typographically, they won't look as good, but for small labels like this, it's probably less distracting than having the horizontal and vertical labels visually mismatched.
Another answer suggested rendering the horizontal labels with a very small, but non-zero, escapement and orientation, forcing those to go through the same rendering pipeline as the vertical labels. This may be the easiest solution for short labels like yours. If you had to handle longer strings of text, I'd suggest one of the first two methods.
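A sketch of the offscreen-rotation idea from the first suggestion (my own illustration, with error handling omitted; w and h are assumed to be the measured horizontal extent of the label, and x, y the target position):

// Render the label horizontally into a memory DC, then blit it rotated
// 90 degrees counter-clockwise with PlgBlt.
HDC hdcMem = CreateCompatibleDC(hdc);
HBITMAP hbm = CreateCompatibleBitmap(hdc, w, h);
HGDIOBJ hbmOld = SelectObject(hdcMem, hbm);

// ... select the (antialiased) font into hdcMem and TextOutW the label ...

POINT dst[3];
dst[0] = { x, y + w };     // where the source's upper-left corner lands
dst[1] = { x, y };         // where the source's upper-right corner lands
dst[2] = { x + h, y + w }; // where the source's lower-left corner lands
PlgBlt(hdc, dst, hdcMem, 0, 0, w, h, NULL, 0, 0);

SelectObject(hdcMem, hbmOld);
DeleteObject(hbm);
DeleteDC(hdcMem);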
When using lfEscapement, you will often get strange behaviour, as it renders text through a fairly different pipeline.
A trick is to set lfEscapement for both rulers: one with 900, and one with a very low value (such as 1 or even 10). Once both render with escapement, you should be good.
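A minimal sketch of that trick (my own illustration, reusing the LOGFONT setup from the question; whether you also need to set lfOrientation depends on the graphics mode of the DC):

LOGFONT Lf = {0};
Lf.lfHeight = 12;
lstrcpyW(Lf.lfFaceName, L"Arial");

// Vertical ruler: rotate the baseline 90 degrees (units are tenths of a degree).
Lf.lfEscapement = 900;
HFONT hfVertical = CreateFontIndirectW(&Lf);

// Horizontal ruler: a tiny non-zero escapement (0.1 degree) looks level but
// forces the same rotated-text rendering path as the vertical ruler.
Lf.lfEscapement = 1;
HFONT hfHorizontal = CreateFontIndirectW(&Lf);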
If you're still having issues with smoothing, try doing something like this:
BOOL bSmooth;

// Get the previous smoothing value.
SystemParametersInfo(SPI_GETFONTSMOOTHING, 0, &bSmooth, 0);

// Turn smoothing off.
SystemParametersInfo(SPI_SETFONTSMOOTHING, 0, NULL, 0);

// Draw text.

// Restore smoothing.
SystemParametersInfo(SPI_SETFONTSMOOTHING, bSmooth, NULL, 0);