How to transform mouse location in isometric tiling map? - algorithm

So I've managed to write the first part (the algorithm) that calculates each tile's position for drawing the map (see below). However, I need to be able to convert the mouse location to the appropriate cell, and I've been almost pulling my hair out because I can't figure out how to get the cell from the mouse location. My concern is that it involves some pretty advanced math, or that it's something easy I'm just not able to notice.
For example, if the mouse position is 112;35, how do I calculate/transform it to get that the cell at that position is 2;3?
Maybe there is a math-minded programmer here who can help me with this, or someone who knows how to do it or can point me to some information?
var cord:Point = new Point();
cord.x = (x - 1) * 28 + (y - 1) * 28;
cord.y = (y - 1) * 14 + (x - 1) * (- 14);
Speaking of the map: each cell (a transparent tile, 56x28 pixels) is placed in the center of the previous cell (or at the zero position for cell 1;1); above is the code I use for converting cell-to-position. I tried a lot of calculations for position-to-cell, but each of them failed.
Edit:
After reading a lot of information, it seems that using an off-screen color map (where colors are mapped to tiles) is the fastest and most efficient solution?

I know this is an old post, but I want to update it since some people might still be looking for answers to this issue, just like I was earlier today. I figured this out myself, and there is also a much better way to render the map so you don't get tile overlapping issues.
The code is as simple as this:
mouse_grid_x = floor((mouse_y / tile_height) + (mouse_x / tile_width));
mouse_grid_y = floor((-mouse_x / tile_width) + (mouse_y / tile_height));
mouse_x and mouse_y are mouse screen coordinates.
tile_height and tile_width are the actual tile size, not the size of the image itself. As you can see in my example picture, I've added dirt under my tile; this is just for easier rendering, the actual size is 24 x 12. The coordinates are also floored to keep the resulting grid x and y rounded down.
Also notice that I render these tiles starting from y = 0 and x = tile_width / 2 (the red dot). This means my 0,0 actually starts at the top corner of the tile (tilted) and not out in open air. See these tiles as rotated squares; you still want to start from the 0,0 pixel.
Tiles are rendered beginning with Y = 0 and X = 0, up to the map size. After the first row is rendered you skip a few pixels down and to the left; this makes the next line of tiles overlap the first one, which is a great way to keep the layers overlapping correctly. You should render each tile, then whatever is on that tile, before moving on to the next.
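For clarity, here is that picking step as a minimal Python sketch (my transcription, assuming mouse coordinates are already measured from the same origin the tiles are rendered from):
import math

TILE_WIDTH = 24   # actual diamond size, not the image size
TILE_HEIGHT = 12

def screen_to_grid(mouse_x, mouse_y):
    # the same two formulas as above, floored to whole cells
    grid_x = math.floor(mouse_y / TILE_HEIGHT + mouse_x / TILE_WIDTH)
    grid_y = math.floor(mouse_y / TILE_HEIGHT - mouse_x / TILE_WIDTH)
    return grid_x, grid_y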
I'll add a render example too:
for (yy = 0; yy < map_height; yy++)
{
    for (xx = 0; xx < map_width; xx++)
    {
        // draw tiles here with tile coordinates:
        tile_x = (xx * 12) - (yy * 12) - (tile_width / 2)
        tile_y = (yy * 6) + (xx * 6)
        // also draw whatever is on this tile here before moving on
    }
}

(1) x` = 28x - 28 + 28y - 28 = 28x + 28y - 56
(2) y` = -14x + 14 + 14y - 14 = -14x + 14y
Transformation matrix:
[ 28  28 -56 ]   [x]   [x`]
[-14  14   0 ] * [y] = [y`]
[  0   0   1 ]   [1]   [1 ]
Invert the matrix:
[ 28  28 -56 ] ^ -1
[-14  14   0 ]
[  0   0   1 ]
Calculate that with a matrix calculator (I like WIMS):
[ 1/56 -1/28  1 ]
[ 1/56  1/28  1 ]
[  0     0    1 ]
x = (1/56)x` - (1/28)y` + 1
y = (1/56)x` + (1/28)y` + 1
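If you would rather check the inversion numerically than by hand, here is a quick sketch (numpy is my addition, not part of the original workflow):
import numpy as np

# forward transform: cell (x, y, 1) -> screen (x`, y`, 1)
M = np.array([[ 28, 28, -56],
              [-14, 14,   0],
              [  0,  0,   1]], dtype=float)

print(np.linalg.inv(M))
# -> [[ 1/56, -1/28, 1], [ 1/56, 1/28, 1], [ 0, 0, 1]]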

I rendered the tiles as above.
The solution is VERY simple!
First thing:
my tile width and height are both = 32.
This means that in isometric view,
the width = 32 and the height = 16!
MapHeight in this case is 5 (the max. Y value).
y_iso and x_iso are both 0 when x_mouse = 0 and y_mouse = (MapHeight * tilewidth) / 4 (half the map's height in pixels);
when x_mouse += 1, y_iso -= 1.
So first of all I calculate the "per-pixel transformation":
TileY = ((y_mouse*2)-((MapHeight*tilewidth)/2)+x_mouse)/2;
TileX = x_mouse-TileY;
To find the tile coordinates I just divide both by the tile width:
TileY = TileY/32;
TileX = TileX/32;
DONE!!
never had any problems!
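A minimal Python transcription of those steps (my sketch, using the 32-pixel tile width and MapHeight of 5 from this answer):
TILE_WIDTH = 32
MAP_HEIGHT = 5  # max. Y value

def mouse_to_tile(x_mouse, y_mouse):
    # per-pixel transformation...
    tile_y = ((y_mouse * 2) - ((MAP_HEIGHT * TILE_WIDTH) / 2) + x_mouse) / 2
    tile_x = x_mouse - tile_y
    # ...then divide down to tile coordinates
    return int(tile_x / TILE_WIDTH), int(tile_y / TILE_WIDTH)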

I found an algorithm on this site: http://www.tonypa.pri.ee/tbw/tut18.html. I couldn't get it to work properly, but I changed it by trial and error to this form and it works for me now.
int x = mouse.x + offset.x - tile[0;0].x; //tile[0;0].x is the x value from which the map was drawn
int y = mouse.y + offset.y;
double _x =((2 * y + x) / 2);
double _y= ((2 * y - x) / 2);
double tileX = Math.round(_x / (tile.height - 1)) - 1;
double tileY = Math.round(_y / (tile.height - 1));
This is my map generation:
for (int x = 0; x < max_X; x++)
    for (int y = 0; y < max_Y; y++)
        map.drawImage(image,
            ((max_X - 1) * tile.width / 2) - ((tile.width - 1) / 2 * (y - x)),
            ((tile.height - 1) / 2) * (y + x));

One way would be to rotate it back to a square projection:
First translate y so that the dimensions are relative to the origin:
x0 = x_mouse;
y0 = y_mouse-14
Then scale by your tile size:
x1 = x0/28; //or maybe 56?
y1 = y0/28
Then rotate by the projection angle
a = atan(2/1);
x_tile = x1 * cos(a) - y1 * sin(a);
y_tile = y1 * cos(a) + x1 * sin(a);
I may be missing a minus sign, but that's the general idea.

Although you didn't mention it in your original question, I think you said in the comments that you're programming this in Flash. In that case Flash comes with Matrix transformation functions. The most robust way to convert between coordinate systems (e.g. to isometric coordinates) is using Matrix transformations:
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/geom/Matrix.html
You would want to rotate and scale the matrix in the inverse of how you rotated and scaled the graphics.
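The same idea outside of Flash, as a small Python sketch (numpy stands in for Matrix.invert(); the scale and angle values here are hypothetical, not taken from the question):
import numpy as np

# forward transform applied to the graphics: scale, then rotate
sx, sy, a = 1.0, 0.5, np.radians(45)
rotate = np.array([[np.cos(a), -np.sin(a)],
                   [np.sin(a),  np.cos(a)]])
forward = rotate @ np.diag([sx, sy])

# to map mouse coordinates back to the un-projected grid,
# apply the inverse of the forward transform
mouse = np.array([112.0, 35.0])
print(np.linalg.inv(forward) @ mouse)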

Related

HEALPix with texture UV mapping

I found an implementation of the HEALPix algorithm; this is the documentation.
And the output looks very nice.
The following images show the latitude/longitude conversion to HEALPix areas.
The x-axis goes from 0 to 2 * pi. The y-axis goes from 0 to pi. The grey color represents the HEALPix pixel ID, encoded in grey values.
[Images: HEALPix pixelization for Nside = 1, 2, 4, 8]
The different grey values are the IDs for the texture I have to use. That means that each HEALPix pixel represents one texture. The missing part is the UV mapping within each of the HEALPix pixels, as shown below:
[Image: nSide = 1 with UV mapping]
Right now I am using the function:
void ang2pix_ring( const long nside, double theta, double phi, long *ipix)
Which gives me the correct texture ID. But I've no idea how to calculate the UV mapping for each HEALpix pixel.
Is there a way to calculate all four corners in lat/lon coordinates of a HEALpix pixel? Or even better a direct calculation to the UV coordinates?
BTW: I am using the RING scheme. But if the NESTED scheme is simpler to calculate I also would change to that.
After a lot of research I came to a solution for this problem:
First of all, I changed the scheme to NESTED. With the NESTED scheme and a very high nSide value (8192), the value returned by the
void ang2pix_nest( const long nside, double theta, double phi, long *ipix)
function is a long from which the UV coordinates can be read out in the following way:
Bits 26 through 30 represent level 0 (just the 12 base HEALPix pixels).
For higher levels, the bits from 30 down to 26 - (level * 2) represent the HEALPix pixel.
The leftover bits, from 26 - (level * 2) - 1 down to bit 1, encode the UV texture coordinates in the following way:
every second bit (the odd bits), packed together, represents the U coordinate, and the even bits, packed together, represent the V coordinate.
To normalize these UV coordinates, the packed values need to be divided by pow(2, (26 - level * 2) / 2).
Code says more than 1000 words:
#include <cmath>

// Packs every second bit of 'value' (starting from the odd or even
// bit position) into a single integer.
unsigned long ignoreEverySecondBit(unsigned long value, bool odd, unsigned int countBits)
{
    unsigned long result = 0;
    unsigned long mask = odd == true ? 0b1 : 0b10;
    countBits = countBits / 2;
    for (unsigned int i = 0; i < countBits; ++i)
    {
        if ((value & mask) != 0)
        {
            result += std::pow(2, i);
        }
        mask = mask << 2;
    }
    return result;
}
//calculate the HEALPix values:
latLonToHealPixNESTED(nSide, theta, phi, &pix);
result.level = level;
result.texture = pix >> (26 - level * 2);
result.u = static_cast<float>(ignoreEverySecondBit(pix, true, 26 - level * 2));
result.v = static_cast<float>(ignoreEverySecondBit(pix, false, 26 - level * 2));
result.u = result.u / pow(2, (26 - level * 2) / 2);
result.v = result.v / pow(2, (26 - level * 2) / 2);
And of course a few images to show the results. The blue value represents the texture ID, the red value represents the U coordinate and the green value represents the V coordinate:
[Images: results for levels 0 through 4]
I hope this solution will help others too.

How is this ray casting algorithm flawed?

Matrix operations performed on the GPU can be pretty hard to debug because GPU operations don't really allow for console logs.
I've written one designed for a real-time 2D rendering engine, based on a very simple form of what I guess could be called ray casting, and am having trouble figuring out what's wrong with it (it's outputting [0,0,0,255,0,0,0,255,...] instead of populating colors).
this.thread.x is the index of the current unit (color channel) in the matrix being operated on.
scene is a buffer made up of 6-unit clumps, each value containing, in order:
The type of entity, always 1 for "sprite" in this case.
The sprite ID, corresponding to the index in this.constants.textures containing the buffer for the entity's sprite.
X offset, the left edge of the sprite
Y offset, the top edge of the sprite
width of the sprite
height of the sprite
bufferWidth is the width of the render area multiplied by 4 channels.
this.constants.textures is an array containing buffers of each sprite which the sprite IDs from the scene refer to.
Note: For those curious, this is being done with GPU.js, a JavaScript lib that converts a JS func into GLSL code to be run via WebGL.
function(scene, sceneLength, bufferWidth) {
    var channel = this.thread.x % 4;
    if (channel === 3) {
        return 255;
    }
    var x = this.thread.x % bufferWidth;
    var y = Math.floor(this.thread.x / bufferWidth);
    for (let i1 = 0; i1 < sceneLength; i1 += 6) {
        var id = scene[i1 + 1];
        var x1 = scene[i1 + 2];
        var y1 = scene[i1 + 3];
        var w1 = scene[i1 + 4];
        var h1 = scene[i1 + 5];
        var r1 = scene[i1 + 6];
        var offsetX1 = x1 - x;
        if (offsetX1 > 0 && offsetX1 < w1) {
            var offsetY1 = y1 - y;
            if (offsetY1 > 0 && offsetY1 < h1) {
                var c1 = offsetY1 * w1 * 4 + offsetX1 * 4;
                var c1R = c1 - (c1 % 4);
                var c1A = c1R + 3;
                if (this.constants.textures[id][c1A] != 0) {
                    return this.constants.textures[id][c1];
                }
            }
        }
    }
    return 0;
}
Explanation for the concept I'm trying to implement:
With a matrix operation, if you were to perform a pass over the entire render area each time you want to draw a sprite, you'd be doing far more work than necessary. If you break the render area down into chunks and only update the sections involved in the sprite being drawn, that would be a fairly decent way to do it, and certainly good enough for real-time game rendering. This would be a multi-pass approach, where sprites are rendered one at a time.
Alternatively, for what seems to me the most optimal approach possible, we can use a single-pass approach that performs one matrix operation over the entire render area, evaluating for each color channel what should be there, based on a very basic form of collision detection between each sprite in the scene and the relevant pixel in that sprite.
You're calculating your sprite offsets backwards; the calculations should be:
var offsetX1 = x - x1;
and
var offsetY1 = y - y1;
The offsets should increase as x and y increase (assuming the sprite co-ordinates have the same co-ordinate system as the screen co-ordinates), so you shouldn't be subtracting x and y.
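As a standalone sketch of the corrected hit test (plain Python rather than GPU.js, keeping the strict inequalities from the original code):
def pixel_in_sprite(x, y, x1, y1, w1, h1):
    # offsets grow as the pixel moves right/down from the sprite's top-left corner
    offset_x = x - x1
    offset_y = y - y1
    return 0 < offset_x < w1 and 0 < offset_y < h1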

Zooming/scaling a tiled image anchoring the zoom point to the mouse cursor

I've got a project where I'm designing an image viewer for tiled images. Every image tile is 256x256 pixels. For each level of scaling, I'm increasing the size of each image by 5%. I represent the placement of the tiles by dividing the screen into tiles the same size as each image. An offset is used to precisely place each image where needed. When the scaling reaches a certain point (1.5), I switch over to a new layer of images that altogether have a greater resolution than the previous images. The zooming method itself looks like this:
def zoomer(self, mouse_pos, zoom_in):  # (tuple, bool)
    x, y = mouse_pos
    x_tile, y_tile = x / self.tile_size, y / self.tile_size
    old_scale = self.scale
    if self.scale > 0.75 and self.scale < 1.5:
        if zoom_in:
            self.scale += SCALE_STEP  # SCALE_STEP = 5% = 0.05
            ratio = (SCALE_STEP + 1)
        else:
            self.scale -= SCALE_STEP
            ratio = 1 / (SCALE_STEP + 1)
    else:
        if zoom_in:
            self.zoom += 1
            self.scale = 0.8
            ratio = (SCALE_STEP + 1)
        else:
            self.zoom -= 1
            self.scale = 1.45
            ratio = 1 / (SCALE_STEP + 1)
    # Results in x/y lengths of the relevant full image
    x_len = self.size_list[self.levels][0] / self.power()
    y_len = self.size_list[self.levels][1] / self.power()
    # Removing extra pixel if present
    x_len = x_len - (x_len % 2)
    y_len = y_len - (y_len % 2)
    # The tile's picture coordinates
    tile_x = self.origo_tile[0] + x_tile
    tile_y = self.origo_tile[1] + y_tile
    # The mouse's picture pixel address
    x_pic_pos = (tile_x * self.tile_size) - self.img_x_offset + (x % self.tile_size)
    y_pic_pos = (tile_y * self.tile_size) - self.img_y_offset + (y % self.tile_size)
    # Mouse percentile placement within the image
    mouse_x_percent = (x_pic_pos / old_scale) / x_len
    mouse_y_percent = (y_pic_pos / old_scale) / y_len
    # The mouse's new picture pixel address
    new_x = (x_len * self.scale) * mouse_x_percent
    new_y = (y_len * self.scale) * mouse_y_percent
    # Scaling tile size
    self.tile_size = int(TILE_SIZE * self.scale)
    # New mouse screen tile position
    new_mouse_x_tile = x / self.tile_size
    new_mouse_y_tile = y / self.tile_size
    # The mouse's new tile address
    new_tile_x = new_x / self.tile_size
    new_tile_y = new_y / self.tile_size
    # New tile offsets
    self.img_x_offset = (x % self.tile_size) - int(new_x % self.tile_size)
    self.img_y_offset = (y % self.tile_size) - int(new_y % self.tile_size)
    # New origo tile
    self.origo_tile = (int(new_tile_x) - new_mouse_x_tile,
                       int(new_tile_y) - new_mouse_y_tile)
Now, the issue arising from this is that the mouse_.._percent variables never seem to match up with the real position. For testing purposes, I feed the method with a mouse position centered in the middle of the screen and the picture centered in the middle too. As such, the resulting mouse_.._percent variable should, in a perfect world, always equal 50%. For the first level, it does, but quickly wanders off when scaling. By the time I reach the first zoom breakpoint (self.scale == 1.5), the position has drifted to x = 48%, y = 42%.
self.origo_tile is a tuple containing the x/y coordinates of the tile to be drawn at screen tile (0, 0).
I've been staring at this for hours, but can't seem to find a remedy for it...
How the program works:
I apologize that I didn't have enough time to apply this to your code, but I wrote the following zooming simulator. The program allows you to zoom the same "image" multiple times, and it outputs the point of the image that would appear in the center of the screen, along with how much of the image is being shown.
The code:
from __future__ import division  # double underscores, defense against the sinister integer division

width = 256   # original image size
height = 256
posx = 128    # original display center, relative to the image
posy = 128

while 1:
    print "Display width: ", width
    print "Display height: ", height
    print "Center X: ", posx
    print "Center Y: ", posy
    anchx = int(raw_input("Anchor X: "))
    anchy = int(raw_input("Anchor Y: "))
    zmag = int(raw_input("Zoom Percent (0-inf): "))
    zmag /= 100  # convert from percent to decimal
    zmag = 1 / zmag
    width *= zmag
    height *= zmag
    posx = ((anchx - posx) * zmag) + posx
    posy = ((anchy - posy) * zmag) + posy
Sample output:
If this program outputs the following:
Display width: 32.0
Display height: 32.0
Center X: 72.0
Center Y: 72.0
Explanation:
This means the zoomed-in screen shows only a part of the image, that part being 32x32 pixels, and the center of that part being at the coordinates (72,72). This means on both axes it is displaying pixels 56 - 88 of the image in this specific example.
Solution/Conclusion:
Play around with that program a bit, and see if you can implement it into your own code. Keep in mind that different programs move the Center X and Y differently, change the program I gave if you do not like how it works already (though you probably will, it's a common way of doing it). Happy Coding!

Processing - creating circles from current pixels

I'm using Processing, and I'm trying to create a circle from the pixels I have on my display.
I managed to pull the pixels on screen and create a growing circle from them.
However, I'm looking for something much more sophisticated: I want to make it seem as if the pixels on the display are moving from their current location and forming a turning circle or something like that.
This is what I have for now:
int c = 0;
int radius = 30;
int[] allPixels = removeBlackP();

void draw() {
    loadPixels();
    for (int alpha = 0; alpha < 360; alpha++)
    {
        float xf = 350 + radius * cos(alpha);
        float yf = 350 + radius * sin(alpha);
        int x = (int) xf;
        int y = (int) yf;
        if (radius > 200) { radius = 30; break; }
        if (c >= allPixels.length) { c = 0; }
        pixels[y * 700 + x] = allPixels[c];
        updatePixels();
    }
    radius++;
    c++;
}
The function removeBlackP returns an array with all the pixels except for the black ones.
This code works for me. There is an issue that the circle coordinates are only ints, so it seems like some pixels inside the circle won't fill; I can live with that. But I'm looking for something a bit more complex, as I explained.
Thanks!
Fill all pixels of the scanlines belonging to the circle. Using this approach, you will paint all places inside the circle. For every line, calculate the start coordinate (the end one is symmetric). Pseudocode:
for y = center_y - radius; y <= center_y + radius; y++
    dx = Sqrt(radius * radius - (y - center_y) * (y - center_y))
    for x = center_x - dx; x <= center_x + dx; x++
        fill a[y, x]
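A runnable version of that pseudocode in Python (my transcription; coordinates are collected in a list as a stand-in for the fill):
import math

def circle_pixels(center_x, center_y, radius):
    pixels = []
    for y in range(center_y - radius, center_y + radius + 1):
        # horizontal half-width of the circle at this scanline
        dx = int(math.sqrt(radius * radius - (y - center_y) ** 2))
        for x in range(center_x - dx, center_x + dx + 1):
            pixels.append((x, y))
    return pixels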
When you have found places for all pixels, you can make a correlation between the initial pixel places and the calculated ones, and move them step by step.
For example, if the initial coordinates relative to the center point for the k-th pixel are (x0, y0) and the final coordinates are (x1, y1), and you want to make M steps, moving the pixel along a spiral, calculate intermediate coordinates:
calc values once:
    r0 = Sqrt(x0*x0 + y0*y0)  // Math.Hypot if available
    r1 = Sqrt(x1*x1 + y1*y1)
    fi0 = Math.Atan2(y0, x0)
    fi1 = Math.Atan2(y1, x1)
    if fi1 < fi0 then
        fi1 = fi1 + 2 * Pi
for i = 1; i <= M; i++
    x = (r0 + i / M * (r1 - r0)) * Cos(fi0 + i / M * (fi1 - fi0))
    y = (r0 + i / M * (r1 - r0)) * Sin(fi0 + i / M * (fi1 - fi0))
    // shift by center coordinates
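And the spiral stepping as runnable Python (again my transcription; the caller still has to add the center coordinates back before drawing):
import math

def spiral_path(x0, y0, x1, y1, m):
    # intermediate positions of one pixel moving from (x0, y0) to (x1, y1),
    # both relative to the circle's center, along a spiral in m steps
    r0, r1 = math.hypot(x0, y0), math.hypot(x1, y1)
    fi0, fi1 = math.atan2(y0, x0), math.atan2(y1, x1)
    if fi1 < fi0:
        fi1 += 2 * math.pi
    points = []
    for i in range(1, m + 1):
        t = i / m
        r, fi = r0 + t * (r1 - r0), fi0 + t * (fi1 - fi0)
        points.append((r * math.cos(fi), r * math.sin(fi)))
    return points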
The way you go about drawing circles in Processing looks a little convoluted.
The simplest way is to use the ellipse() function, though that involves no pixels.
If you do need to draw an ellipse using pixels, you can make use of PGraphics, which is similar to a separate buffer/"layer" you draw into using Processing drawing commands, but which also has a pixels[] array you can access.
Let's say you want to draw a low-res pixel circle: you can create a small PGraphics, disable smoothing, draw the circle, then render it at a higher resolution. The only catch is that these drawing commands must be placed within beginDraw()/endDraw() calls:
PGraphics buffer;

void setup(){
    //disable sketch's aliasing
    noSmooth();
    buffer = createGraphics(25,25);
    buffer.beginDraw();
    //disable buffer's aliasing
    buffer.noSmooth();
    buffer.noFill();
    buffer.stroke(255);
    buffer.endDraw();
}

void draw(){
    background(255);
    //draw small circle
    float circleSize = map(sin(frameCount * .01),-1.0,1.0,0.0,20.0);
    buffer.beginDraw();
    buffer.background(0);
    buffer.ellipse(buffer.width / 2, buffer.height / 2, circleSize, circleSize);
    buffer.endDraw();
    //render small circle at higher resolution (blocky - no aliasing)
    image(buffer,0,0,width,height);
}
If you want to manually draw a circle using pixels[], you are on the right track using the polar to cartesian conversion formula (x = cos(angle) * radius, y = sin(angle) * radius). Even though it's focused on drawing a radial gradient, you can find an example of drawing a circle (a lot of them, actually) using pixels in this answer.

How to use Bresenham's line drawing algorithm with sub pixel bias?

Bresenham's line drawing algorithm is well known and quite simple to implement.
While there are more advanced ways to draw anti-aliased lines, I'm interested in writing a function which draws a single-pixel-width, non-anti-aliased line, based on floating point coordinates.
This means while the first and last pixels will remain the same, the pixels drawn between them will have a bias based on the sub-pixel position of both end-points.
In principle this shouldn't be all that complicated, since I assume it's possible to use the sub-pixel offsets to calculate an initial error value to use when plotting the line, with all other parts of the algorithm remaining the same.
No sub pixel offset:
X###
###X
Assuming the right-hand point has a sub-pixel position close to the top, the line could look like this:
With sub-pixel offset, for example:
X######
X
Is there a tried & true method of drawing a line that takes sub-pixel coordinates into account?
Note:
This seems like a common operation; I've seen OpenGL drivers take this into account, for example with GL_LINE, though from a quick search I didn't find any answers online - maybe I used the wrong search terms?
At a glance this question looks like it might be a duplicate of: Precise subpixel line drawing algorithm (rasterization algorithm). However, that is asking about drawing a wide line; this is asking about offsetting a single-pixel line.
If there isn't some standard method, I'll try to write this up to post as an answer.
Having just encountered the same challenge, I can confirm that this is possible as you expected.
First, return to the simplest form of the algorithm: (ignore the fractions; they'll disappear later)
x = x0
y = y0
dx = x1 - x0
dy = y1 - y0
error = -0.5
while x < x1:
    if error > 0:
        y += 1
        error -= 1
    paint(x, y)
    x += 1
    error += dy/dx
This means that for integer coordinates, we start half a pixel above the pixel boundary (error = -0.5), and for each pixel we advance in x, we increase the ideal y coordinate (and therefore the current error) by dy/dx.
First let's see what happens if we stop forcing x0, y0, x1 and y1 to be integers: (this will also assume that instead of using pixel centres, the coordinates are relative to the bottom-left of each pixel1, since once you support sub-pixel positions you can simply add half the pixel width to the x and y to return to pixel-centred logic)
x = x0
y = y0
dx = x1 - x0
dy = y1 - y0
error = (0.5 - (x0 % 1)) * dy/dx + (y0 % 1) - 1
while x < x1:
    if error > 0:
        y += 1
        error -= 1
    paint(x, y)
    x += 1
    error += dy/dx
The only change was the initial error calculation. The new value comes from simple trig to calculate the y coordinate when x is at the pixel centre. It's worth noting that you can use the same idea to clip the line's start position to be within some bound, which is another challenge you'll likely face when you want to start optimising things.
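To make that trig step explicit (my own reading of the formula, measuring the error against the top edge of the starting pixel row): the ideal y at the first pixel centre $x_c = \lfloor x_0 \rfloor + \tfrac{1}{2}$ is
$$y_{ideal} = y_0 + \left(\tfrac{1}{2} - (x_0 \bmod 1)\right)\frac{dy}{dx},$$
and the error is how far that lies past the top of row $\lfloor y_0 \rfloor$:
$$error = y_{ideal} - (\lfloor y_0 \rfloor + 1) = \left(\tfrac{1}{2} - (x_0 \bmod 1)\right)\frac{dy}{dx} + (y_0 \bmod 1) - 1.$$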
Now we just need to convert this into integer-only arithmetic. We'll need some fixed multiplier for the fractional inputs (scale), and the divisions can be handled by multiplying them out, just as the standard algorithm does.
# assumes x0, y0, x1 and y1 are pre-multiplied by scale
x = x0
y = y0
dx = x1 - x0
dy = y1 - y0
error = (scale - 2 * (x0 % scale)) * dy + 2 * (y0 % scale) * dx - 2 * dx * scale
while x < x1:
    if error > 0:
        y += scale
        error -= 2 * dx * scale
    paint(x / scale, y / scale)
    x += scale
    error += 2 * dy * scale
Note that x, y, dx and dy keep the same scaling factor as the input variables (scale), whereas error has a more complex scaling factor: 2 * dx * scale. This allows it to absorb the division and fraction in its original formulation, but means we need to apply the same scale everywhere we use it.
Obviously there's a lot of room to optimise here, but that's the basic algorithm. If we assume scale is a power-of-two (2^n), we can start to make things a little more efficient:
dx = x1 - x0
dy = y1 - y0
mask = (1 << n) - 1
error = (2 * (y0 & mask) - (2 << n)) * dx - (2 * (x0 & mask) - (1 << n)) * dy
x = x0 >> n
y = y0 >> n
while x < (x1 >> n):
    if error > 0:
        y += 1
        error -= 2 * dx << n
    paint(x, y)
    x += 1
    error += 2 * dy << n
As with the original, this only works in the (x >= y, x > 0, y >= 0) octant. The usual rules apply for extending it to all cases, but note that there are a few extra gotchas due to the coordinates no longer being centred in the pixel (i.e. reflections become more complex).
You'll also need to watch out for integer overflows: error has twice the precision of the input variables, and a range of up to twice the length of the line. Plan your inputs, precision, and variable types accordingly!
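If it helps, here is the power-of-two version above as runnable Python (my transcription; painted pixels are collected in a list, and the same octant restriction applies):
def draw_line_fixed(x0, y0, x1, y1, n):
    # endpoints are fixed-point integers, pre-multiplied by scale = 2**n
    points = []
    dx = x1 - x0
    dy = y1 - y0
    mask = (1 << n) - 1
    error = (2 * (y0 & mask) - (2 << n)) * dx - (2 * (x0 & mask) - (1 << n)) * dy
    x = x0 >> n
    y = y0 >> n
    while x < (x1 >> n):
        if error > 0:
            y += 1
            error -= 2 * dx << n
        points.append((x, y))
        x += 1
        error += 2 * dy << n
    return points

# e.g. a line from (1.25, 0.75) to (9.5, 4.0) with 8 fractional bits:
# draw_line_fixed(int(1.25 * 256), int(0.75 * 256), int(9.5 * 256), int(4.0 * 256), 8)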
1: Coordinates are relative to the corner which is closest to 0,0. For an OpenGL-style coordinate system that's the bottom left, but it could be the top-left depending on your particular scenario.
I had a similar problem, with the addition of needing sub-pixel endpoints; I also needed to make sure all pixels which intersect the line are drawn.
I'm not sure that my solution will be helpful to OP, both because it's been 4+ years, and because of the sentence "This means while the first and last pixels will remain the same..." For me, that is actually a problem (more on that later). Hopefully this may be helpful to others.
I don't know if this can be considered Bresenham's algorithm, but it is awfully similar. I'll explain it for the (+,+) quadrant. Let's say you wish to draw a line from point (Px,Py) to (Qx,Qy) over a grid of pixels with width W. Having a grid width W > 1 allows for sub-pixel endpoints.
For a line going in the (+,+) quadrant, the starting point is easy to calculate: just take the floor of (Px,Py). As you will see later, this only works if Qx >= Px and Qy >= Py.
Now you need to find which pixel to go to next. There are 3 possibilities: (x+1,y), (x,y+1), and (x+1,y+1). To make this decision, I use the 2D cross product, defined as cross(a, b) = a.x * b.y - a.y * b.x:
If this value is negative, vector b is right/clockwise of vector a.
If this value is positive, vector b is left/anti-clockwise of vector a.
If this value is zero vector b points in the same direction as vector a.
To make the decision on which pixel is next, compare the cross product between the line P-Q [red in image below] and a line between the point P and the top-right pixel (x+1,y+1) [blue in image below].
The vector between P and the top-right corner of the pixel can be calculated as b = ((x+1) * W - Px, (y+1) * W - Py).
So, we will use the value from the 2D cross product:
If this value is negative, the next pixel will be (x,y+1).
If this value is positive, the next pixel will be (x+1,y).
If this value is exactly zero, the next pixel will be (x+1,y+1).
That works fine for the starting pixel, but the rest of the pixels will not have a point that lies inside them. Luckily, after the initial point, you don't need a point to be inside the pixel for the blue vector. You can keep extending it like so:
The blue vector starts at the starting point of the line, and is updated to the (x+1,y+1) for every pixel. The rule for which pixel to take is the same. As you can see, the red vector is right of the blue vector. So, the next pixel will be the one right of the green pixel.
The value for the cross product needs to be updated for every pixel, depending on which pixel you took.
Add dx_cross if the next pixel was (x+1), add dy_cross if it was (y+1), and add both if it went to (x+1,y+1).
This process is repeated until it reaches the ending pixel, (Qx / W, Qy / W).
All combined this leads to the following code:
int dx = x2 - x1;
int dy = y2 - y1;
int local_x = x1 % width;
int local_y = y1 % width;
int cross_product = dx*(width-local_y) - dy*(width-local_x);
int dx_cross = -dy*width;
int dy_cross = dx*width;
int x = x1 / width;
int y = y1 / width;
int end_x = x2 / width;
int end_y = y2 / width;
while (x != end_x || y != end_y) {
    SetPixel(x,y,color);
    int old_cross = cross_product;
    if (old_cross >= 0) {
        x++;
        cross_product += dx_cross;
    }
    if (old_cross <= 0) {
        y++;
        cross_product += dy_cross;
    }
}
Making it work for all quadrants is a matter of reversing the local coordinates and some absolute values. Here's the code which works for all quadrants:
int dx = x2 - x1;
int dy = y2 - y1;
int dx_x = (dx >= 0) ? 1 : -1;
int dy_y = (dy >= 0) ? 1 : -1;
int local_x = x1 % square_width;
int local_y = y1 % square_width;
int x_dist = (dx >= 0) ? (square_width - local_x) : (local_x);
int y_dist = (dy >= 0) ? (square_width - local_y) : (local_y);
int cross_product = abs(dx) * abs(y_dist) - abs(dy) * abs(x_dist);
int dx_cross = -abs(dy) * square_width;
int dy_cross = abs(dx) * square_width;
int x = x1 / square_width;
int y = y1 / square_width;
int end_x = x2 / square_width;
int end_y = y2 / square_width;
while (x != end_x || y != end_y) {
    SetPixel(x,y,color);
    int old_cross = cross_product;
    if (old_cross >= 0) {
        x += dx_x;
        cross_product += dx_cross;
    }
    if (old_cross <= 0) {
        y += dy_y;
        cross_product += dy_cross;
    }
}
However, there is a problem! This code will not stop in some cases. To understand why, you need to look closely at exactly which conditions count as an intersection between a line and a pixel.
When exactly is a pixel drawn?
I said that all pixels which intersect the line need to be drawn. But there's some ambiguity in the edge cases.
Here is a list of all possible intersections in which a pixel will be drawn for a line where Qx >= Px & Qy >= Py:
A - If a line intersects the pixel completely, the pixel will be drawn.
B - If a vertical line intersects the pixel completely, the pixel will be drawn.
C - If a horizontal line intersects the pixel completely, the pixel will be drawn.
D - If a vertical line perfectly touches the left of the pixel, the pixel will be drawn.
E - If a horizontal line perfectly touches the bottom of the pixel, the pixel will be drawn.
F - If a line endpoint starts inside of a pixel going (+,+), the pixel will be drawn.
G - If a line endpoint starts exactly on the left side of a pixel going (+,+), the pixel will be drawn.
H - If a line endpoint starts exactly on the bottom side of a pixel going (+,+), the pixel will be drawn.
I - If a line endpoint starts exactly on the bottom left corner of a pixel going (+,+), the pixel will be drawn.
And here are some pixels which do NOT intersect the line:
A' - If a line obviously doesn't intersect a pixel, the pixel will NOT be drawn.
B' - If a vertical line obviously doesn't intersect a pixel, the pixel will NOT be drawn.
C' - If a horizontal line obviously doesn't intersect a pixel, the pixel will NOT be drawn.
D' - If a vertical line exactly touches the right side of a pixel, the pixel will NOT be drawn.
E' - If a horizontal line exactly touches the top side of a pixel, the pixel will NOT be drawn.
F' - If a line endpoint starts exactly on the top right corner of a pixel going in the (+,+) direction, the pixel will NOT be drawn.
G' - If a line endpoint starts exactly on the top side of a pixel going in the (+,+) direction, the pixel will NOT be drawn.
H' - If a line endpoint starts exactly on the right side of a pixel going in the (+,+) direction, the pixel will NOT be drawn.
I' - If a line exactly touches a corner of the pixel, the pixel will NOT be drawn. This applies to all corners.
Those rules apply as you would expect (just flip the image) for the other quadrants. The problem I need to highlight is when an endpoint lies exactly on the edge of a pixel. Take a look at this case:
This is like image G' above, except the y-axis is flipped because Qy < Py. There are 4x4 red dots because W is 4, making the pixel dimensions 4x4. Each of the 4 dots are the ONLY endpoints a line can touch. The line drawn goes from (1.25, 1.0) to (somewhere).
This shows why it's incorrect (at least with how I defined pixel-line intersections) to say the pixel endpoints can be calculated as the floor of the line endpoints. The floored pixel coordinate for that endpoint seems to be (1,1), but it is clear that the line never really intersects that pixel. It just touches it, so I don't want to draw it.
Instead of flooring the line endpoints, you need to floor the minimal endpoints, and ceil the maximal endpoints minus 1 across both x & y dimensions.
So finally here is the complete code which does this flooring/ceiling:
int dx = x2 - x1;
int dy = y2 - y1;
int dx_x = (dx >= 0) ? 1 : -1;
int dy_y = (dy >= 0) ? 1 : -1;
int local_x = x1 % square_width;
int local_y = y1 % square_width;
int x_dist = (dx >= 0) ? (square_width - local_x) : (local_x);
int y_dist = (dy >= 0) ? (square_width - local_y) : (local_y);
int cross_product = abs(dx) * abs(y_dist) - abs(dy) * abs(x_dist);
int dx_cross = -abs(dy) * square_width;
int dy_cross = abs(dx) * square_width;
int x = x1 / square_width;
int y = y1 / square_width;
int end_x = x2 / square_width;
int end_y = y2 / square_width;

// Perform ceiling/flooring of the pixel endpoints
if (dy < 0)
{
    if ((y1 % square_width) == 0)
    {
        y--;
        cross_product += dy_cross;
    }
}
else if (dy > 0)
{
    if ((y2 % square_width) == 0)
        end_y--;
}
if (dx < 0)
{
    if ((x1 % square_width) == 0)
    {
        x--;
        cross_product += dx_cross;
    }
}
else if (dx > 0)
{
    if ((x2 % square_width) == 0)
        end_x--;
}

while (x != end_x || y != end_y) {
    SetPixel(x,y,color);
    int old_cross = cross_product;
    if (old_cross >= 0) {
        x += dx_x;
        cross_product += dx_cross;
    }
    if (old_cross <= 0) {
        y += dy_y;
        cross_product += dy_cross;
    }
}
This code itself hasn't been tested, but it comes slightly modified from my GitHub project where it has been tested.
Let's assume you want to draw a line from P1 = (x1, y1) to P2 = (x2, y2) where all the numbers are floating point pixel coordinates.
Calculate the true pixel coordinates of P1 and P2 and paint them: P* = (round(x), round(y)).
If abs(x1* - x2*) <= 1 && abs(y1* - y2*) <= 1 then you are finished.
Decide whether it is a horizontal (true) or a vertical line (false): abs(x1 - x2) >= abs(y1 - y2).
If it is a horizontal line and x1 > x2 or if it is a vertical line and y1 > y2: swap P1 with P2 (and also P1* with P2*).
If it is a horizontal line you can get the y-coordinates for all the x-coordinates between x1* and x2* with the following formula:
y(x) = round(y1 + (x - x1) / (x2 - x1) * (y2 - y1))
If you have a vertical line you can get the x-coordinates for all the y-coordinates between y1* and y2* with this formula:
x(y) = round(x1 + (y - y1) / (y2 - y1) * (x2 - x1))
Here is a demo you can play around with; you can try different points on line 12.
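A direct Python transcription of those steps (my sketch; paint stands in for whatever pixel-plotting routine you use):
def draw_line(x1, y1, x2, y2, paint):
    # endpoints are floats in pixel coordinates
    p1 = (round(x1), round(y1))
    p2 = (round(x2), round(y2))
    paint(*p1)
    paint(*p2)
    if abs(p1[0] - p2[0]) <= 1 and abs(p1[1] - p2[1]) <= 1:
        return
    if abs(x1 - x2) >= abs(y1 - y2):  # horizontal line
        if x1 > x2:
            x1, y1, x2, y2, p1, p2 = x2, y2, x1, y1, p2, p1
        for x in range(p1[0] + 1, p2[0]):
            paint(x, round(y1 + (x - x1) / (x2 - x1) * (y2 - y1)))
    else:                             # vertical line
        if y1 > y2:
            x1, y1, x2, y2, p1, p2 = x2, y2, x1, y1, p2, p1
        for y in range(p1[1] + 1, p2[1]):
            paint(round(x1 + (y - y1) / (y2 - y1) * (x2 - x1)), y)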
