Why does Direct2D look pixelated and inaccurate? - winapi

I created a layered window and added some rounded rectangles, ellipses and text to make a custom window.
In short, this is how I wrote it:
D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &d2dFactory);
d2dFactory->CreateHwndRenderTarget
(
    RenderTargetProperties(),
    HwndRenderTargetProperties(hwnd, SizeU(clientRect.right, clientRect.bottom)),
    &pRenderTarget
);
...
DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(dWriteFactory), reinterpret_cast<IUnknown**>(&dWriteFactory));
dWriteFactory->CreateTextFormat
(
    L"Calibri",
    NULL,
    DWRITE_FONT_WEIGHT_NORMAL,
    DWRITE_FONT_STYLE_NORMAL,
    DWRITE_FONT_STRETCH_NORMAL,
    14.0f,
    L"en-us",
    &pTextFormat
);
...
D2D1_ROUNDED_RECT roundedRect = RoundedRect
(
    RectF
    (
        clientRect.left,
        clientRect.top,
        clientRect.right,
        clientRect.bottom
    ),
    20, 20
);
...
pRenderTarget->BeginDraw();
...
pRenderTarget->FillRoundedRectangle
(
    roundedRect,
    SCB_DARK_BLUE
);
pRenderTarget->DrawTextA
(
    title,
    titleSize,
    pTextFormat,
    RectF(185, 2, 255, 2),
    SCB_WHITE
);
pRenderTarget->FillEllipse
(
    Ellipse
    (
        Point2F(20, 10),
        6, 6
    ),
    SCB_RED
);
I followed Microsoft's documentation.
The window looks like this (screenshot omitted). Its size in pixels is 300 by 400.
The text, corners and ellipses look pixelated and inaccurate.
What am I missing in my program? Thanks in advance.

Is your program set to be high-DPI aware? Note that I am blind and therefore can't see your screenshot, but that is the first thing I think of from your description. If the process is not DPI aware, Windows renders it at 96 DPI and stretches the resulting bitmap to the monitor's real DPI, which produces exactly this kind of pixelation.
If you're using Visual Studio, go to Project -> Properties, then Manifest Tool -> Input and Output, and check the "DPI Awareness" setting. It should be either "High DPI Aware" or "Per Monitor High DPI Aware".
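If you'd rather opt in from code instead of the manifest, something like this should work (a minimal sketch; SetProcessDpiAwarenessContext needs Windows 10 1703 or later, so it falls back to the older SetProcessDPIAware on failure):

// Minimal sketch: make the process per-monitor DPI aware before any HWND exists,
// so Direct2D renders at the monitor's real DPI instead of being bitmap-stretched.
#include <windows.h>

int APIENTRY wWinMain( HINSTANCE hInstance, HINSTANCE, PWSTR, int nCmdShow )
{
    if ( !SetProcessDpiAwarenessContext( DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2 ) )
        SetProcessDPIAware(); // fallback for systems older than Windows 10 1703

    // ... register the window class, create the window, run the message loop ...
    return 0;
}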

Related

How to design a screen resolution compatible UI in Flutter?

The design of an application I developed with Flutter is broken on devices with different screen sizes (tablet vs. phone). Cards and containers overlap and vary in size. What is your suggestion?
I really suggest you take a look at the LayoutBuilder class, which was created exactly to solve your problem. Check the docs for every detail; here's a simple usage:
LayoutBuilder(
  builder: (context, constraints) {
    if (constraints.maxWidth < YOUR_SIZE) {
      return Column();
    } else {
      return GridView.count(crossAxisCount: 2);
    }
  },
),
Very intuitively: if the width of the device is smaller than YOUR_SIZE, the screen is not that wide and a column fits well. Otherwise you could place your widgets in a grid with several columns.
Official video about LayoutBuilder on YouTube.
Use widget composition instead of functions that return a Widget. Functions are not optimized: you can't const-construct the widget, and it gets rebuilt every time!
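As a minimal sketch of that advice (TitleCard and its contents are just an illustrative example):

import 'package:flutter/material.dart';

// A widget class can be const-constructed, so Flutter can skip rebuilding it,
// unlike a helper function that returns a fresh Widget on every build.
class TitleCard extends StatelessWidget {
  const TitleCard({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) => const Card(child: Text('Title'));
}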
To make the size of any widget proportional on every device, you need to get the device width and height before setting the widget's size.
For instance, to create a container whose size is responsive to any device's height and width, and which also rescales when the device is rotated, use MediaQuery to get the device height and width and scale your widget size as follows:
...
@override
Widget build(BuildContext context) {
  double _height = MediaQuery.of(context).size.height;
  double _width = MediaQuery.of(context).size.width;
  return Container(
    height: _width > _height ? _width / 10 : _height / 10,
    width: _width > _height ? _height / 10 : _width / 10,
    child: AnyWidget(),
  );
}
...

How to handle different screen sizes in React Native?

I am developing an application in React Native. I have made a UI which works fine on the iPhone 6 but not on the iPhone 5 or earlier models.
How should I fix this?
You need to think about proportion when building your UI.
1. Use percentages (%) for width and aspectRatio for height, or vice versa.
container: {
    width: "100%",
    aspectRatio: 10 / 3, // height will be "30%" of your width
}
2. Use flex for the jobs percentages can't do. For example, if you have items of arbitrary size in a list and you want them to share equal space, assign each of them flex: 1 (see the sketch after this list).
3. Use rem from EStyleSheet instead of pixels. rem is a scale factor: if your rem is 2, then "11rem" becomes 11 * 2 = 22. If we make rem proportional to the screen size, your UI will scale with any screen size.
// we define rem as entireScreenWidth / 380
const entireScreenWidth = Dimensions.get('window').width;
EStyleSheet.build({ $rem: entireScreenWidth / 380 });
// how to use rem
container: {
    width: "100%",
    aspectRatio: 10 / 3, // height will be "30%"
    padding: "8rem", // it'll scale depending on the screen size
}
4. Use a ScrollView for content that could potentially overflow its box, for example a TextView.
5. Every time you think about using pixels, consider using rem as in tip 3.
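A minimal sketch for tip 2 (the colors and the fixed row height are arbitrary):

import React from 'react';
import { View } from 'react-native';

// Three items of arbitrary content share the row equally: each one gets flex: 1.
const EqualRow = () => (
  <View style={{ flexDirection: 'row', height: 100 }}>
    <View style={{ flex: 1, backgroundColor: 'tomato' }} />
    <View style={{ flex: 1, backgroundColor: 'gold' }} />
    <View style={{ flex: 1, backgroundColor: 'skyblue' }} />
  </View>
);

export default EqualRow;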
For a detailed explanation, you can read the article 7 Tips to Develop React Native UIs For All Screen Sizes.
Have you designed the app using fixed widths and heights? You should definitely use the capabilities of flexbox and avoid setting fixed sizes as much as possible. The flex property can be used to define how much space a <View /> should use relative to others, and the other properties on that page can be used to lay out elements in a flexible way that should give the desired results on a range of screen sizes.
Sometimes, you may also need a <ScrollView />.
When you do need fixed sizes, you could use Dimensions.get('window').
You need to calculate sizes dynamically, relying on the screen size.
import { Dimensions, StyleSheet } from 'react-native'
[...]
const { width, height } = Dimensions.get('window')
[...]
const styles = StyleSheet.create({
    container: {
        flex: 1,
        flexDirection: 'column',
    },
    myView: {
        width: width * 0.8, // 80% of screen's width
        height: height * 0.2 // 20% of screen's height
    }
})
If you are using TabBarIOS, remember that Dimensions.get('window') gives you the whole screen's height; this means you'll have to take into account that the tab bar has a fixed height of 56.
So, for example, when using TabBarIOS:
const WIDTH = Dimensions.get('window').width,
      HEIGHT = Dimensions.get('window').height - 56
Then use WIDTH and HEIGHT as above.

Gutter widths in Susy 2

In beta 2 of Susy 2, is it possible to set gutter widths in the main grid settings, like so?
$susy: (
    flow: ltr,
    math: static,
    output: float,
    gutter-position: after,
    container: auto,
    container-position: center,
    columns: 12,
    gutters: .25,
    gutter-override: 20px, // <-- this setting
    column-width: 77.5px,
    global-box-sizing: border-box,
    last-flow: to,
    debug: (
        image: hide,
        color: rgba(#66f, .25),
        output: background,
        toggle: top right,
    ),
);
Because it doesn't seem to like it. I need to set explicit widths for the columns and gutters of this grid, and the container width should be determined from those.
In Susy Next, gutters are set as a ratio (.25) relative to the column-width (77.5px). Given those settings, Susy can determine the gutter width (77.5px * .25 = 19.375px).
By saying you want static math, .25 gutters, and 77.5px columns, you have already set the gutter width, and the container can already be calculated. If you like, you can use real pixel values to set your gutter ratio:
$susy: (
column-width: 77.5px,
gutters: 20px/77.5px, // same as ".258064516"
);
Gutter-override is not a global setting and won't help you here; that setting is only for spans, when you want to override the global value. Also, I don't recommend sub-pixel settings: pixels don't really break down into fractions, and fractional-pixel declarations aren't handled consistently across browsers.

Face tracking software on Mac (built-in camera)

I want a way to track a user looking at a screen over time, e.g. which exact seconds of the day, in normal use, the user spent looking at the screen.
I'm wondering what innovative ideas or pre-existing software would allow me to do this.
In more detail, the way I see it, there would be some tolerance levels, e.g. distance from the screen and angle of the head to the screen, within which the user counts as "engaged" with the monitor. If the camera on, say, a MacBook Pro were used to track this, it would record in a text file/key-value store a timestamp and a boolean value for each second that the program is turned on.
Does anyone have any experience with this sort of thing?
You can find a good starting point here: http://code.google.com/p/ehci/
It's software based on OpenCV that tracks the head and detects its orientation. It's open source.
There are face trackers already implemented (and already trained), for example in OpenCV. I suggest you first start with just tracking faces. Once you have a robust face tracker, you can generate output telling how long a face has been looking at the screen, etc.
Later you can add improvements. Once you detect a face, you can try to recognize the person by analyzing the face pixels.
Another line of work is to recognize parts of the face, like the mouth, eyes, nose, eyebrows...
If you can track the face and its parts, you can try to recognize facial expression patterns, like happiness, sadness, etc.
Face.com has a solution to recognize faces, so just grab the camera input and send it to their servers, I guess?
I've built a face detection system to do something like this once using OpenCV; you can see the result here.
The method I used was two separate Haar cascades, both standard classifiers built into OpenCV: haarcascade_frontalface_default.xml to see if the user is watching the screen, and haarcascade_profileface.xml to see if the user is looking away. The following code should get you started with OpenCV and C++.
CvHaarClassifierCascade *cascade_face;
CvMemStorage *storage_face;
CvHaarClassifierCascade *cascade_profile;
CvMemStorage *storage_profile;

// Load the profile-face classifier.
storage_profile = cvCreateMemStorage( 0 );
cascade_profile = ( CvHaarClassifierCascade* )cvLoad( "haarcascade_profileface.xml", 0, 0, 0 );

// Load the frontal-face classifier.
storage_face = cvCreateMemStorage( 0 );
cascade_face = ( CvHaarClassifierCascade* )cvLoad( "haarcascade_frontalface_default.xml", 0, 0, 0 );

// Detect profiles (user looking away).
CvSeq *profile = cvHaarDetectObjects( img, cascade_profile, storage_profile, 1.1, 3, 0, cvSize( 20, 20 ) );
for ( i = 0; i < ( profile ? profile->total : 0 ); i++ ) {
    CvRect *r = ( CvRect* )cvGetSeqElem( profile, i );
    // draw rectangle here, or do other stuff
}

// Detect frontal faces (user watching the screen).
CvSeq *faces = cvHaarDetectObjects( img, cascade_face, storage_face, 1.1, 3, 0, cvSize( 20, 20 ) );
for ( i = 0; i < ( faces ? faces->total : 0 ); i++ ) {
    CvRect *r = ( CvRect* )cvGetSeqElem( faces, i );
    // draw rectangle here, or do other stuff
}
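If you'd rather use the modern OpenCV C++ API, here is a rough sketch of the per-second logging the question describes (the file names, the one-second sampling, and treating any frontal detection as "looking at the screen" are all assumptions, not a polished implementation):

#include <opencv2/objdetect.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/imgproc.hpp>
#include <chrono>
#include <ctime>
#include <fstream>
#include <thread>
#include <vector>

int main()
{
    // Assumes the pre-trained cascade file sits next to the executable.
    cv::CascadeClassifier frontal( "haarcascade_frontalface_default.xml" );
    cv::VideoCapture camera( 0 );          // built-in camera
    std::ofstream log( "engagement.log" ); // one "timestamp engaged" pair per line

    cv::Mat frame, gray;
    while ( camera.read( frame ) )
    {
        cv::cvtColor( frame, gray, cv::COLOR_BGR2GRAY );
        std::vector<cv::Rect> faces;
        frontal.detectMultiScale( gray, faces, 1.1, 3, 0, cv::Size( 20, 20 ) );

        // A frontal detection counts as "engaged" for this second.
        log << std::time( nullptr ) << ' ' << !faces.empty() << '\n';
        std::this_thread::sleep_for( std::chrono::seconds( 1 ) );
    }
}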

Set SWT Check/Radio Button Foreground color in Windows

This is not a duplicate of "How to set SWT button foreground color?"; it's more like a follow-up. I wrote follow-up questions as comments but did not get any responses, so I thought I'd try posting it as a question in the hope that some expert will see it.
As pointed out in the referenced question, Windows' native button widgets do not support setting the foreground color (in fact, further experiments revealed that setForeground() works under the Classic theme, but not under the others).
The answer/suggestion given in the referenced question is a good one (namely, providing a paint listener and drawing over the text in the correct color). I gave it a whirl but ran into a world of problems trying to determine the coordinates at which to draw the text:
It appears that, in addition to SWT attributes like alignment, Windows has some rather hard-to-figure-out rules for deciding the location of the text. What makes it worse is that the location appears to depend on the Windows theme in effect. Since I need to draw the text exactly over the natively drawn text in order to override the color, this is a huge problem.
Please, can someone provide some much-needed help here? It'd be greatly appreciated!
Thank you!
In the same PaintListener you use to paint the colored background, you have to calculate the position and draw the text. Here's how we do it:
public void paintControl( PaintEvent event ) {
    // Is the button enabled?
    if ( !isEnabled() ) {
        return;
    }

    // Get button bounds.
    Button button = (Button)event.widget;
    int buttonWidth = button.getSize().x;
    int buttonHeight = button.getSize().y;

    // Get text bounds.
    int textWidth = event.gc.textExtent( getText() ).x;
    int textHeight = event.gc.textExtent( getText() ).y;

    // Calculate text coordinates.
    int textX = ( buttonWidth - textWidth ) / 2;
    int textY = ( buttonHeight - textHeight ) / 2;

    /*
     * If the mouse is clicked and is over the button, i.e. the button is 'down',
     * the text must be moved a bit down and to the right.
     * To control this, we add a MouseListener and a MouseMoveListener to our button.
     * In the MouseListener, we update the mouseDown flag in the mouseDown and mouseUp methods.
     * In the MouseMoveListener, we update the mouseOver flag in the mouseMove method.
     */
    if ( mouseDown && mouseOver ) {
        textX++;
        textY++;
    }

    // Draw the new text.
    event.gc.drawText( getText(), textX, textY );

    // If the button has focus, draw the dotted border on it.
    if ( isFocusControl() ) {
        int[] dashes = { 1, 1 };
        event.gc.setLineDash( dashes );
        event.gc.drawRectangle( 3, 3, buttonWidth - 8, buttonHeight - 8 );
    }
}
In the end, I decided to implement it as a custom Composite with a checkbox/radio button and a label. Not ideal, but I'll have to make do.
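For reference, a rough sketch of that Composite approach (the widget names and the click-forwarding listener are illustrative, not a drop-in implementation):

import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.GridLayout;
import org.eclipse.swt.widgets.*;

public class ColoredCheckbox {
    public static void main( String[] args ) {
        Display display = new Display();
        Shell shell = new Shell( display );
        shell.setLayout( new GridLayout( 2, false ) );

        // A text-less native check button next to a Label we are free to color.
        Button check = new Button( shell, SWT.CHECK );
        Label label = new Label( shell, SWT.NONE );
        label.setText( "Enable feature" );
        label.setForeground( display.getSystemColor( SWT.COLOR_DARK_BLUE ) );

        // Forward clicks on the label so the pair behaves like one control.
        label.addListener( SWT.MouseUp, e -> check.setSelection( !check.getSelection() ) );

        shell.pack();
        shell.open();
        while ( !shell.isDisposed() )
            if ( !display.readAndDispatch() ) display.sleep();
        display.dispose();
    }
}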
