Pixels in Direct2D

The dark gray lines are supposed to be black and 1 pixel wide:
pRT->DrawLine(Point2F(100, 120), Point2F(300, 120), blackbrush, 1);
The light gray lines are supposed to be black and 0.5 pixel wide:
pRT->DrawLine(Point2F(120, 130), Point2F(280, 130), blackbrush, 0.5);
Instead, they are both 2 pixels wide. If I ask for a 2-pixel-wide line, it is black, but naturally 2 pixels wide.
The render target has the same size as the client area of the window. I would like pixel accuracy like in GDI, one coordinate = one pixel and pure colors...
Thanks.

Direct2D is rendering correctly. When you give it a pixel coordinate such as (100, 120), that refers to the top-left corner of the pixel element that spans from pixel coordinates (100, 120) to (101, 121) (top/left are inclusive, right/bottom are exclusive). Since it's a straight horizontal line, you are effectively getting a filled rectangle from (99.5, 119.5) - (300.5, 120.5). The edges of this spill into adjacent pixels, which is why you're getting "2 pixel width" lines at the "wrong" brightness. You have to think in terms of pixel coordinates (points with no area) and pixel elements (physical points on the screen with an area of 1x1, or just 1, of course).
If you want to draw a straight line that covers the pixels (100, 120) to (300, 120), you should either use SemMike's suggestion of aliased rendering (which is great for straight lines!), or you can use half-pixel offsets (0.5 here because strokeWidth = 1; for other stroke widths, adjust by strokeWidth/2). Drawing from (100.5, 120.5) - (299.5, 120.5) with a stroke width of 1.0 will get you what you're looking for. The stroke extends around the pixel coordinates you specify, so you will get the "filled rectangle" over the pixel elements (100, 120) - (300, 121). And again, that's an exclusive range, so y=121 isn't actually filled, and neither is x=300.
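For example, using the names from the question, the offset call would look like this (a minimal sketch):
// The stroke is centered on the path, so the half-pixel offset makes it
// cover exactly pixel row y = 120 instead of straddling rows 119 and 120.
pRT->DrawLine(Point2F(100.5f, 120.5f), Point2F(299.5f, 120.5f), blackbrush, 1.0f);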
If you're wondering why this doesn't happen with something like GDI, it's because GDI doesn't do antialiased rendering, so everything always snaps to pixel elements. If you're wondering why this doesn't happen with WPF when using Shapes, it's because WPF uses layout rounding (UseLayoutRounding) and pixel snapping. Direct2D does not provide those services because it's a relatively low-level API.

You can play around with pRenderTarget->DrawLine(Point2F(100-0.5, 120-0.5), Point2F(300-0.5, 120-0.5), blackbrush, 1), but it rapidly becomes tricky. The simplest is:
pRenderTarget->SetAntialiasMode(D2D1_ANTIALIAS_MODE_ALIASED);
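If only some primitives should snap to whole pixels, a small sketch (same names as above) is to switch the mode around just those calls and then restore the default:
pRenderTarget->SetAntialiasMode(D2D1_ANTIALIAS_MODE_ALIASED);   // crisp, pixel-snapped lines
pRenderTarget->DrawLine(Point2F(100, 120), Point2F(300, 120), blackbrush, 1.0f);
pRenderTarget->SetAntialiasMode(D2D1_ANTIALIAS_MODE_PER_PRIMITIVE);   // back to the default antialiasing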
Hope it helps somebody...

Related

Find out if texture contains at least one black pixel in WebGL?

Is there a special WebGL trick to check if a texture contains at least one black RGB pixel, without having to read the pixels back on the CPU?
To me, it seems that checking pixels on the CPU is the only solution. In that case,
is there a way, for example, to reduce a high-resolution texture to a 1x1 texture containing a single boolean color, so that I only have to read one single pixel, for performance reasons?
Thanks !
The only idea I have is, erm... at least complex.
Let's say we have a texture with dimension N, then:
Make the canvas 1x1 pixel large.
Create an array of N*N points, each point with different [x,y] attributes representing the pixel position it will look up.
In the vertex shader, do a texture lookup based on the point's position. If the color of that pixel is not black, reject the point (discard only exists in fragment shaders, so in practice move the point off-screen);
otherwise set the point position to [0,0].
In the fragment shader, simply draw black (assuming we have a white canvas) for the point.
Then what you have is a 1x1 canvas with black color if there were any black pixels, or white color if there weren't any, and you can simply read it back with the CPU.
The bottleneck in this case is the second step (building the point array), and also the CPU-GPU communication. This will only run faster if you need to do a lot of reads in a short time and if you know the size of the texture before the application runs, so you can reuse the same point buffer for all textures.
Just an idea, not sure if it would work.
Make a render target, draw your texture into it using a shader that draws white if the pixel is rgb 0,0,0 and black otherwise. Let's assume it is 1024x768 and now has 1 white pixel. Draw that into a texture half the size in each dimension, in this case 512x384. With linear filtering on, if we have the worst case of just 1 white pixel, it will average to 0.25 (63). We can do that 2 more times, first to 256x192, then again to 128x96. That one originally white (255) pixel will now be (3). So run the black/white shader again and repeat 64x48, 32x24, 16x12, run the black/white shader, then again 8x6, 4x3, 2x2, run the black/white shader. Then 1x1. Now check it: if that's not pure black, there was at least 1 black pixel.
Instead of doing a 1/2 size reduction each time you could try reducing those 3 levels into one by averaging a bunch of pixels in a shader. 15x15 pixels for example would still leave something > 0 if only 1 pixel was white and the rest black. In that case starting at 1024x768 it would be
1024x768 -> 67x52 -> 5x4 -> 1x1
I have no idea if that would be faster or slower.
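As a quick sanity check on the 15x15 averaging step (assuming a plain box average into an 8-bit render target): a single white pixel among 15*15 = 225 samples averages to 255 / 225 ≈ 1.13, which still rounds to a nonzero value, so the lone white pixel survives a single reduction pass of that size.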

libgdx: Rotate a texture when drawing it with spritebatch

I'm trying to rotate textures when I draw them. I figured it would make more sense to do this than to rotate the images 90 degrees in paint.net and save them in different files. I looked through the API documentation for the SpriteBatch drawing arguments, but I just don't understand it. There are a bunch of arguments such as srcX, srcY, originX and so on. Also, I would like to know how to do the same for texture regions. Here's a link to the API documentation page: http://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/graphics/g2d/SpriteBatch.html
Thank you!
This is again from the documentation, but copied here for ease of use and so I can explain a little better.
x - the x-coordinate in screen space
y - the y-coordinate in screen space
these two values represent the location to draw your texture in screen space (game space). Pretty self-explanatory.
originX - the x-coordinate of the scaling and rotation origin relative to the screen space coordinates
originY - the y-coordinate of the scaling and rotation origin relative to the screen space coordinates
these two values represent the location where rotations (and scaling) happen from with respect to the screen space. So for instance, if you give the value 0, 0 here, the rotation and scaling will happen around one of the corners of your texture (the bottom left I believe), whereas if you give the center (width/2, height/2), the rotation and scaling would happen around the center of your texture (this is probably what you want for any "normal" rotations)
width - the width in pixels
height - the height in pixels
the dimensions for drawing your texture on screen.
scaleX - the scale of the rectangle around originX/originY in x
scaleY - the scale of the rectangle around originX/originY in y
values representing the scale of your rectangle, where values between 0 and 1 will shrink the rectangle, and values greater than 1 will expand the rectangle. Note that this is with respect to the origin you gave earlier, which means that if this is not the center the image may look distorted.
rotation - the angle of counter clockwise rotation of the rectangle around originX/originY
the angle to rotate the image by. Again, this is around the origin given earlier, so the rotation may not appear "correct" if the origin is not the center of the image
srcX - the x-coordinate in texel space
srcY - the y-coordinate in texel space
these two values are the starting location of the actual region of the image file (.png, .jpg, whatever) that you wish to use, in pixels. Basically the start of your image.
srcWidth - the source width in texels
srcHeight - the source height in texels
similarly, these two values are the width and height of the actual region of the image file you are using, in pixels.
flipX - whether to flip the sprite horizontally
flipY - whether to flip the sprite vertically
Finally, these two booleans are used to flip the image either horizontally or vertically.
Now you may notice that the similar method for drawing TextureRegions has no srcX, srcY, srcWidth, or srcHeight. This is because those are the values you give to a texture region when you create it from a texture.
Essentially what that means is that the command
//with TextureRegions
spriteBatch.draw(textureRegion, x, y, originX, originY, width, height, scaleX, scaleY, rotation);
is equivalent to
//with Textures from TextureRegions
spriteBatch.draw(textureRegion.getTexture(), x, y, originX, originY, width, height, scaleX, scaleY, rotation, textureRegion.getRegionX(), textureRegion.getRegionY(), textureRegion.getRegionWidth(), textureRegion.getRegionHeight(), false, false);

Cocoa Resolution Independent Button Graphic

I'm trying to create a graphic in Sketch (a vector-based graphic design application). I export to PDF and this is what my original graphic looks like:
But when I set it as the image of an NSButton, it gets drawn like this:
Why does this occur? The right and bottom edges in particular are altered a lot. I'm not sure if this is a Cocoa drawing issue or an issue with my original graphic.
The problem is with (mis)alignment with the pixel grid and anti-aliasing. It looks like you've scaled the image so that the borders on the left, right, and bottom are roughly one pixel in thickness. However, the right and bottom borders are straddling the boundary between pixels. The result is that they contribute half their "darkness" to the pixel on one side of the boundary and the other half to the pixel on the other side of the boundary.
You should tweak either the proportions of the image or the size at which you're drawing it to avoid that particular alignment. It looks as though it's being rendered as roughly 10.5 pixels wide. You want it to be either 10 pixels or 11 pixels wide, so the right edge corresponds more closely to a pixel column.

How to get a 1 pixel line with NSBezierPath?

I'm developing a custom control. One of the requirements is to draw lines. Although this works, I noticed that my 1 pixel wide lines do not really look like 1 pixel wide lines - I know, they're not really pixels but you know what I mean. They look more like two or three pixels wide. This becomes very apparent when I draw a dashed line with a 1 pixel dash and a 2 pixel gap. The 1 pixel dashes actually look like tiny lines instead of dots.
I've read the Cocoa Drawing documentation and although Apple mentions the setLineWidth method, changing the line width to values smaller than 1.0 will only make the line look more vague and not thinner.
So, I suspect there's something else influencing the way my lines look.
Any ideas?
Bezier paths are drawn centered on their path, so if you draw a 1-pixel-wide path along the X-axis, the line actually covers the Y-coordinates { -0.5, 0.5 }. The solution is usually to offset the coordinate by 0.5 so that the line does not straddle a pixel boundary. You should be able to shift your bounding box by 0.5 to get sharper drawing behavior.
Francis McGrew already gave the right answer, but since I did a presentation on this once, I thought I'd add some pictures.
The problem here is that coordinates in Quartz lie at the intersections between pixels. This is fine when filling a rectangle, because every pixel that lies inside the coordinates gets filled. But lines are technically (mathematically!) invisible. To draw them, Quartz has to actually draw a rectangle with the given line width. This rectangle is centered over the coordinates:
So when you ask Quartz to stroke a rectangle with integral coordinates, it has the problem that it can only draw whole pixels. But here you see that we have half pixels. So what it does is it averages the color. For a 50% black (the line color) and 50% white (the background) line, it simply draws each pixel in grey:
This is where your washed-out drawings come from. The fix is now obvious: Don't draw between pixels, and you achieve that by moving your points by half a pixel, so your coordinate is centered over the desired pixel:
Now of course just offsetting may not be what you wanted. Because if you compare the filled variant to the stroked one, the stroke is one pixel larger towards the lower right. If you're e.g. clipping to the rectangle, this will cut off the lower right:
Since people usually expect the rectangle to stroke inside the specified rectangle, what you usually do is that you offset by 0.5 towards the center, so the lower right effectively moves up one pixel. Alternately, many drawing apps offset by 0.5 away from the center, to avoid overlap between the border and the fill (which can look odd when you're drawing with transparency).
Note that this only holds true for 1x screens. 2x Retina screens actually exhibit this problem differently, because each of the pixels below is actually drawn by 4 Retina pixels, which means they can actually draw the half-pixels. However, you still have the same problem if you want a sharp 0.5pt line. Also, since Apple may in the future introduce other Retina screens where e.g. every pixel is made up of 9 Retina pixels (3x), or whatever, you should really not rely on this. Instead, there are now API calls to convert rectangles to "backing aligned", which does this for you, no matter whether you're running 1x, 2x, or a fictitious 3x.
PS - Since I went to the hassle of writing this all up, I've put this up on my web site: http://orangejuiceliberationfront.com/are-your-rectangles-blurry-pale-and-have-rounded-corners/ where I'll update and revise this description and add more images.
The answer is (buried) in the Apple Docs:
"To avoid antialiasing when you draw a one-point-wide horizontal or vertical line, if the line is an odd number of pixels in width, you must offset the position by 0.5 points to either side of a whole-numbered position"
Hidden in the Drawing and Printing Guide for iOS: iOS Drawing Concepts, though nothing that specific is to be found in the current, standard (OS X) Cocoa Drawing Guide.
As for the effects of invoking setDefaultLineWidth: the docs also state that:
"A width of 0 is interpreted as the thinnest line that can be rendered on a particular device. The actual rendered line width may vary from the specified width by as much as 2 device pixels, depending on the position of the line with respect to the pixel grid and the current anti-aliasing settings. The width of the line may also be affected by scaling factors specified in the current transformation matrix of the active graphics context."
I found some info suggesting that this is caused by anti-aliasing. Turning anti-aliasing off temporarily is easy:
[[NSGraphicsContext currentContext] setShouldAntialias: NO];
This gives a crisp, 1 pixel line. After drawing just switch it on again.
I tried the solution suggested by Francis McGrew by offsetting the x coordinate by 0.5; however, that did not make any difference to the appearance of my line.
EDIT:
To be more specific, I changed x and y coordinates individually and together with an offset of 0.5.
EDIT 2:
I must have done something wrong, as changing the coordinates with an offset of 0.5 actually does work. The end result is better than the one obtained by switching off the anti-aliasing, so I'll make Francis McGrew's answer the accepted answer.

How do I add an outline to a 2d concave polygon?

I'm successfully drawing the convex polys which make up the following white concave shape.
The orange color is my attempt to add a uniform outline around the white shape. As you can see it's not so uniform. On some edges the orange doesn't show at all.
Evidently using...
glScalef(1.1, 1.1, 0.0);
... to draw a slightly larger orange shape before I drew the white shape wasn't the way to go.
I just have a nagging feeling I'm missing a more simple way to do this.
Note that the white part is going to be mapped with a texture which has areas of transparency, so the orange part needs to be behind the white shapes too, not just surrounding them.
Also, I'm using a parallel projection matrix; that's why glScalef's z is set to 0.0 - it reminds me there is no perspective scaling.
Any ideas? Thanks!
Nope, you won't be going anywhere with glScale in this case. Possible options are:
a) construct an extruded polygon from the original one (possibly rounding sharp corners)
b) draw the polygon with GL_LINES and set glLineWidth to your desired outline width (in fact you might want to draw the outline with 2x width first); see the sketch after this answer
The first approach will generate CPU load, the second one might slow down rendering significantly AFAIK.
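For illustration, here is a rough C++ sketch of option b) in the fixed-function style the question already uses (drawOutlinedShape, Vertex, and outlineVerts are made-up names; GL_LINE_LOOP is used so the outline closes):
#include <GL/gl.h>   // legacy fixed-function OpenGL, matching the question
#include <vector>

struct Vertex { float x, y; };

// Draws a thick orange outline first, then the white shape can be drawn over it.
void drawOutlinedShape(const std::vector<Vertex>& outlineVerts, float outlineWidth)
{
    glLineWidth(2.0f * outlineWidth);   // stroke is centered on the edge,
    glColor3f(1.0f, 0.5f, 0.0f);        // so use twice the desired width
    glBegin(GL_LINE_LOOP);
    for (const Vertex& v : outlineVerts)
        glVertex2f(v.x, v.y);
    glEnd();

    glColor3f(1.0f, 1.0f, 1.0f);
    // ...then draw the textured white convex pieces as before.
}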
You can displace your polygon in the 8 directions of the compass.
You can have a look at this link: http://simonschreibt.de/gat/cell-shading/
It's a nice trick, and might do the job
Unfortunately there is no simple way to get an outline of consistent width - you just have to do the maths (a rough code sketch follows the two steps below):
For each edge: calculate the normal, scale to the desired width, and add to the edge vertices to get a line segment on the new expanded edge
Calculate the intersection of the lines through two adjacent segments to find the expanded vertex positions
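For interest, here is a rough C++ sketch of those two steps (the type and function names are made up for illustration; it assumes counter-clockwise winding and simple mitered corners, which can spike at very sharp angles):
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

static float cross2(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }

// Offsets every edge outward by 'width' and intersects adjacent offset
// edges to get the expanded vertex positions. Assumes counter-clockwise
// vertex order, so the outward normal of an edge (dx, dy) is (dy, -dx).
std::vector<Vec2> expandPolygon(const std::vector<Vec2>& poly, float width)
{
    const std::size_t n = poly.size();
    std::vector<Vec2> out(n);
    for (std::size_t i = 0; i < n; ++i) {
        const Vec2& prev = poly[(i + n - 1) % n];  // edge before vertex i
        const Vec2& cur  = poly[i];
        const Vec2& next = poly[(i + 1) % n];      // edge after vertex i

        Vec2 d1 = { cur.x - prev.x, cur.y - prev.y };
        Vec2 d2 = { next.x - cur.x, next.y - cur.y };
        float l1 = std::sqrt(d1.x * d1.x + d1.y * d1.y);
        float l2 = std::sqrt(d2.x * d2.x + d2.y * d2.y);
        Vec2 n1 = { d1.y / l1, -d1.x / l1 };       // unit outward normals
        Vec2 n2 = { d2.y / l2, -d2.x / l2 };

        // One point on each offset edge line.
        Vec2 a1 = { prev.x + n1.x * width, prev.y + n1.y * width };
        Vec2 a2 = { cur.x  + n2.x * width, cur.y  + n2.y * width };

        float denom = cross2(d1, d2);
        if (std::fabs(denom) < 1e-6f) {
            // Adjacent edges are (nearly) collinear: just push the vertex out.
            out[i] = { cur.x + n1.x * width, cur.y + n1.y * width };
        } else {
            // Intersect the two offset lines: a1 + t*d1 == a2 + s*d2.
            Vec2 diff = { a2.x - a1.x, a2.y - a1.y };
            float t = cross2(diff, d2) / denom;
            out[i] = { a1.x + d1.x * t, a1.y + d1.y * t };
        }
    }
    return out;
}
For the question's case you would then draw the expanded polygon in orange first, and the textured white shape on top of it.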
A distinct answer from those offered to date, posted just for interest; if you're in GLES 2.0 and have access to shaders, then you could render the source polygon to a framebuffer with a texture bound as the colour renderbuffer, then do a second pass to write to the screen (so you're using the image of the white polygon as the input texture and running a post-processing pixel shader on every pixel of the screen) with a shader that obeys the following logic for an outline of thickness q:
if the input is white then output a white pixel
if the input pixel is black then sample every pixel within a radius of q from the current pixel; if any one of them is white then output an orange pixel, otherwise output a black pixel
In practice you'd spend an awful lot of time on texture sampling and probably turn that into the bottleneck. And they'd be mostly dependent reads, which are bad for the pipeline on lots of GPUs - including the PowerVR SGX that powers the overwhelming majority of OpenGL ES 2.0 devices.
EDIT: actually, you could speed this up substantially; if your radius is q then have the hardware generate mipmaps for your framebuffer object, and take the first level for which the output pixels are at least q by q in the source image. You've then essentially got a set of bins that'll be pure black if there were no bits of the polygon in that region and pure white if that area was entirely internal to the polygon. For each output fragment that you're considering might be on the border, you can quite possibly jump straight to a conclusion of definitely in or definitely out and beyond the border based on four samples of the mipmap.
