Problem with a very simple tile-based game - tiles

I am trying to create a Pac-Man-like game. I have an array that looks like this:
1111111111111
1000000000001
1111110111111
1000000000001
1111111111111
1 = Wall, 0 = Empty space
I use this array to draw tiles that are 16x16 in size.
The game character is 32x32.
Initially I represented the character's position in array indexes, [1,1] etc.
I would update his position if array[character.new_y][character.new_x] == 0
Then I translated these array coordinates to pixels, [y*16, x*16] to draw him.
He was lining up nicely and wouldn't go into walls, but I noticed that since I was moving him by 16 pixels per step, he was moving very fast.
So I decided to do it in reverse and store the game character's position in pixels instead, so that he could move by less than 16 pixels per step.
I thought that a simple if statement such as this:
if array[(character.new_pixel_y)/16][(character.new_pixel_x)/16] == 0
would prevent him from going into walls, but unfortunately he eats a bit into the bottom and right-side walls.
Any ideas how I would properly translate a pixel position to array indexes? I guess this is something simple, but I really can't figure it out :(

You appear to have transposed your 2D array. Was that on purpose?
(Is it array[x][y] or array[y][x]?)
Also, a game character of double the tile size would not fit in the example map array given!
Edit:
If your character eats into the bottom and right by exactly half a tile, and doesn't overlap the top and left, then you need to offset your character by half a tile (8 pixels on both x and y).
So where you "translated these array coordinates to pixels, [y*16, x*16] to draw him",
you should change this to:
[(y*16)-8, (x*16)-8] to draw him when he is 32x32.
Keep storing his position as [y*16, x*16] to follow the 16x convention.
This would be easier for you if you kept everything 16x16 first!
You should also store tile_size_x and tile_size_y as constants.
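If you do stay with pixel positions as in the question, the usual fix for eating into the bottom and right walls is to test every corner of the sprite, not just its top-left one. A rough sketch in C-style code (the function and variable names are illustrative assumptions, not the asker's actual code):

#include <stdbool.h>

enum { TILE_SIZE = 16, SPRITE_SIZE = 32 };

// The level array from the question: 1 = wall, 0 = empty, indexed [y][x].
extern int map[5][13];

bool canMoveTo(int pixelX, int pixelY) {
    // Test all four corners of the sprite. Subtract 1 on the far edges so a
    // sprite whose right/bottom edge sits exactly on a tile boundary doesn't
    // read into the next tile over.
    int xs[2] = { pixelX, pixelX + SPRITE_SIZE - 1 };
    int ys[2] = { pixelY, pixelY + SPRITE_SIZE - 1 };
    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++) {
            if (map[ys[j] / TILE_SIZE][xs[i] / TILE_SIZE] != 0) {
                return false;   // this corner would be inside a wall
            }
        }
    }
    return true;
}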

Related

Understanding the display of the pixels on the screen

I'm sorry if this is a stupid question, but I want to check whether I'm right or not.
Suppose we have an 8x8 pixel screen and we want to represent a 2x2 square; a pixel can be black (1) or white (0). I would imagine this as an 8x8 matrix:
[[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,1,1,0,0,0],
[0,0,0,1,1,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0]]
Using this matrix, we paint the pixels and update them (for example) every second. We also have the coordinates of the pixels representing the square: (4,4) (4,5) (5,4) (5,5), and if we want to move the square we add 1 to the x part of each coordinate.
Is that right or not?
Graphics rendering is a complex mix of art, mathematics, and hardware, assuming you're asking about how the screen actually works rather than a toy problem of simulating displays.
The buffer you described in the question is the interface which software uses to tell the hardware (video card) what to draw on the screen, and how it is actually done is in the realm of hardware. Hence, the logic for manipulating graphics objects (things you want drawn) is separate from the rendering process itself. Your program tells the buffer which pixels you want to update, and that's all; this can be done as often as you like, regardless of whether the hardware actually manages to flush its buffers onto the screen.
The software would be responsible for sorting out what exactly to draw on the screen; this is usually handled on multiple logical levels. Higher levels would construct a virtual worldspace for your objects and determine their interactions and attributes (position, velocity, collision, etc.), as well as a camera to determine the FOV the screen should display (if your world is 3D). Lower levels would then figure out the actual pixel values to write to the buffer, based on the camera FOV (3D), or just plain pixel coordinates after applying the desired transformations (rotation, shear, resize, etc.) to the associated image (2D).
It should be noted that virtual worldspace coordinates do not necessarily reflect pixel coordinates, even in 2D worlds. I'm not an expert on this subject, frankly, but I suspect it'll be easier if you first determine how far you want the object to move in virtual space, and then apply the necessary transformations to show the results in a viewing window with customizable dimensions.
In short, you probably don't want to 'add 1 to x' when you want to move something on screen; you move it in a high abstraction layer, and then draw the results. This will save you a lot of trouble, especially if you have a complex scene with all kinds of stuff and a background.
Assuming you want to move a group of pixels to the right, then yes, all you need to do is identify the group of pixels and add 1 to their X coordinate. Of course you need to fill in the vacated spots with zeroes, otherwise that would have been a copy operation.
Keep in mind, my answer is a bit naive in the sense that when you reach the rightmost boundary, you have to wrap.
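As a toy illustration of that idea (not anything from the question itself), here is a sketch that shifts the whole 8x8 buffer one pixel to the right, filling the vacated column with zeroes and ignoring the wrapping issue:

void shiftRight(int framebuffer[8][8]) {
    for (int y = 0; y < 8; y++) {
        // Walk right-to-left so we never overwrite a pixel we still need.
        for (int x = 7; x > 0; x--) {
            framebuffer[y][x] = framebuffer[y][x - 1];
        }
        framebuffer[y][0] = 0;   // fill the vacated spot with background
    }
}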

Digital Image Analysis: Bilinear Interpolation

I am trying to understand the concept of bilinear interpolation. For example in the case where bilinear interpolation is used to rotate an image (let's say by 45 degrees), and then we rotate it back by the same amount. Is the resulting image the same as the original?
What about when an image is scaled up by a factor c, and then scaled down by the same factor c, is the resulting image the same as the original image?
In general, you will not get back the same values. I'll try and explain empirically...
Imagine a white rectangle on a black background, which will only have values of 255 (white) and 0 (black) in it. When you rotate it, the pixels on the edges of the rectangle will fall between pixels in the new image. At that point, you will end up interpolating between 0 and 255 and get some entirely new value, say 172. And now you immediately have a problem, because bilinear interpolation has introduced a value that wasn't in the original image, and when you rotate back, you will end up interpolating between that new 172 and 255 or 0, which will give you yet another new value not present in the original image.
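For reference, a minimal sketch of the interpolation step itself; sampling a rotated coordinate that lands between a black and a white pixel produces exactly the kind of new in-between value described above (the fractional offsets in the example call are made up for illustration):

// tl, tr, bl, br are the four neighbouring pixel values; fx, fy are the
// fractional distances (0..1) of the sample point from the top-left neighbour.
double bilinear(double tl, double tr, double bl, double br, double fx, double fy) {
    double top    = tl + (tr - tl) * fx;   // interpolate along x on the top row
    double bottom = bl + (br - bl) * fx;   // interpolate along x on the bottom row
    return top + (bottom - top) * fy;      // then interpolate between the rows
}

// e.g. bilinear(0, 255, 0, 255, 0.675, 0.5) is roughly 172, a value that
// never appeared in the original black-and-white image.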
I hope that helps. It is the reason why you should use Nearest Neighbour interpolation when your pixels represent, say, classes in a supervised classification. You start off with water in class 0 and sand on the beach in class 17 beside it, and if you use bilinear interpolation to resize or geo-correct, you can get a result of class 7, which might represent wheat, and you will rarely find wheat growing on a beach!

How to find multiple concave & convex shapes in an image

The image attached is a mask of "walkable space" for a game, which is painted by the player, and so could be anything. I need to create colliders that prevent the player from walking on to the blue parts of the mask (water). The game itself is in 3D space, the mask is for the terrain textures (Unreal Engine 4).
What I've done at the moment is reduce the size of the texture from 2048x2048 to 256x256, and I create a collider in 3D space for each blue pixel in the mask. This works OK with a small number of blue pixels, but it's not going to work well (or at all) if there are a lot of blue pixels (water): there'd be too many colliders to spawn on the fly.
So I guess the only other option is to find the points that make up the boundaries of any number of concave shapes in the image, with which I will create wall colliders.
Hope that makes sense. Any help is very much appreciated.
Thanks
After you have reduced the size to something smaller, fill a bool array with zeroes and ones: ones where there is blue, and zeroes where there isn't. From there you can turn all ones with no zero neighbours into zeroes. That is because if a cell has no empty neighbours and isn't empty itself, no object could collide with it, so you don't need to check it. That should vastly improve performance, but if you need more, you can then find all straight lines of filled cells and check for collisions with those. So it would look something like this:
In this case you end up having to check collisions with 6 objects instead of with 18 and the difference gets greater as the blobs get bigger.
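A rough sketch of the pruning step, assuming the downscaled mask has already been turned into a 256x256 grid of flags (all names and sizes here are illustrative):

#include <stdbool.h>

enum { W = 256, H = 256 };

// keep[y][x] ends up true only for filled cells that touch an empty cell (or
// the border), i.e. the cells that actually need a collider.
void pruneInteriorCells(bool filled[H][W], bool keep[H][W]) {
    int dx[4] = { 1, -1, 0, 0 };
    int dy[4] = { 0, 0, 1, -1 };
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            keep[y][x] = false;
            if (!filled[y][x]) continue;            // empty cell: no collider at all
            for (int i = 0; i < 4; i++) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || ny < 0 || nx >= W || ny >= H || !filled[ny][nx]) {
                    keep[y][x] = true;              // exposed cell: keep it
                    break;
                }
            }
        }
    }
}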

Snake/Worm movement in a 2D environment

That's my goal:
I have multiple sprites that make up the character. The first sprite (in an array of n sprites, where n is the number of segments) is the head, and the other sprites must follow it.
So, if the head changes its angle and then moves (in the direction the head is pointing), the other sprites must follow the head, like a snake/worm does.
I'm doing this with cocos2d, but that's not really relevant: I know cocos2d, so the framework isn't the problem; it's the concept I'm missing.
So, how can I do it? How can the other sprites follow the head perfectly? Game examples are Death Worm or Super Mega Worm on the App Store.
If needed, I can post the code that I'm using (it works badly) and an image of the result, but I don't know if that is necessary.
Thanks.
I would simply store the positions of each section in an arraylist/vector. On each draw, calculate the next head position based on gameplay and add it to the list, then remove the oldest item (index 0) and draw the sprites at the positions in the list.
I'm sorry if this seems similar to Alexey's answer; I just think this might be a simpler implementation.
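A minimal sketch of that idea in cocos2d-style Objective-C (the positions and segments properties, and the method name, are assumptions for illustration):

// positions holds one CGPoint per segment, oldest (tail) first, head last.
- (void)advanceTo:(CGPoint)newHeadPosition {
    [self.positions addObject:[NSValue valueWithCGPoint:newHeadPosition]];
    [self.positions removeObjectAtIndex:0];          // drop the oldest (tail) position
    for (NSUInteger i = 0; i < self.positions.count; i++) {
        CCSprite *segment = self.segments[i];        // same order as positions
        segment.position = [self.positions[i] CGPointValue];
    }
}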
You will need to manually set the positions of every sprite that makes up the character. The new position has to be calculated and set inside the scheduled method that gets called repeatedly.
sprite.position = ccp(newX, newY);
You may refer to this too: Cocos2D set sprite position in relation to another sprite
Have an array of coordinates for every segment of the critter, including the head.
If it needs to move straight (not changing direction), you shift the array values: the tail (last) segment gets the coordinates of the adjacent segment, that one gets the coordinates of the one before it, and so on. For the head segment you set the new head coordinates, which are the old coordinates incremented by the direction vector d:
 dx  dy  direction
  0  -1  up
  0  +1  down
 -1   0  left
 +1   0  right
If you want to change the direction, change the direction vector appropriately and then do the same array shift and head coordinate update.
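In plain C-style code, one movement step might look like this (the segment ordering and all names here are just for illustration):

typedef struct { int x, y; } Cell;

// segments[0] is the head; segments[count - 1] is the tail.
void advanceSnake(Cell *segments, int count, int dx, int dy) {
    // Each body segment takes the coordinates of the segment in front of it...
    for (int i = count - 1; i > 0; i--) {
        segments[i] = segments[i - 1];
    }
    // ...and the head moves by the direction vector.
    segments[0].x += dx;
    segments[0].y += dy;
}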

How to get a 1 pixel line with NSBezierPath?

I'm developing a custom control. One of the requirements is to draw lines. Although this works, I noticed that my 1 pixel wide lines do not really look like 1 pixel wide lines - I know, they're not really pixels, but you know what I mean. They look more like two or three pixels wide. This becomes very apparent when I draw a dashed line with a 1 pixel dash and a 2 pixel gap. The 1 pixel dashes actually look like tiny lines instead of dots.
I've read the Cocoa Drawing documentation, and although Apple mentions the setLineWidth method, changing the line width to values smaller than 1.0 only makes the line look fainter, not thinner.
So, I suspect there's something else influencing the way my lines look.
Any ideas?
Bezier paths are drawn centered on their path, so if you draw a 1 pixel wide path along the X-coordinate, the stroke actually covers the Y-coordinates { -0.5, 0.5 }. The solution is usually to offset the coordinate by 0.5 so that the line does not straddle the pixel boundaries. You should be able to shift your bounding box by 0.5 to get sharper drawing behavior.
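For example (a sketch, not the asker's code), a sharp one-pixel horizontal line inside your drawRect: might look like this once the 0.5 offset is applied; the coordinates are arbitrary:

NSBezierPath *path = [NSBezierPath bezierPath];
[path setLineWidth:1.0];
// Place the path on the half-pixel so the 1-pixel stroke covers exactly one
// row of pixels instead of straddling two.
[path moveToPoint:NSMakePoint(10.5, 20.5)];
[path lineToPoint:NSMakePoint(90.5, 20.5)];
[[NSColor blackColor] set];
[path stroke];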
Francis McGrew already gave the right answer, but since I did a presentation on this once, I thought I'd add some pictures.
The problem here is that coordinates in Quartz lie at the intersections between pixels. This is fine when filling a rectangle, because every pixel that lies inside the coordinates gets filled. But lines are technically (mathematically!) invisible. To draw them, Quartz has to actually draw a rectangle with the given line width. This rectangle is centered over the coordinates:
So when you ask Quartz to stroke a rectangle with integral coordinates, it has the problem that it can only draw whole pixels. But here you see that we have half pixels. So what it does is it averages the color. For a 50% black (the line color) and 50% white (the background) line, it simply draws each pixel in grey:
This is where your washed-out drawings come from. The fix is now obvious: Don't draw between pixels, and you achieve that by moving your points by half a pixel, so your coordinate is centered over the desired pixel:
Now of course just offsetting may not be what you wanted. Because if you compare the filled variant to the stroked one, the stroke is one pixel larger towards the lower right. If you're e.g. clipping to the rectangle, this will cut off the lower right:
Since people usually expect the rectangle to stroke inside the specified rectangle, what you usually do is that you offset by 0.5 towards the center, so the lower right effectively moves up one pixel. Alternately, many drawing apps offset by 0.5 away from the center, to avoid overlap between the border and the fill (which can look odd when you're drawing with transparency).
Note that this only holds true for 1x screens. 2x Retina screens actually exhibit this problem differently, because each of the pixels below is actually drawn by 4 Retina pixels, which means they can actually draw the half-pixels. However, you still have the same problem if you want a sharp 0.5pt line. Also, since Apple may in the future introduce other Retina screens where e.g. every pixel is made up of 9 Retina pixels (3x), or whatever, you should really not rely on this. Instead, there are now API calls to convert rectangles to "backing aligned", which does this for you, no matter whether you're running 1x, 2x, or a fictitious 3x.
PS - Since I went to the trouble of writing this all up, I've put it on my web site: http://orangejuiceliberationfront.com/are-your-rectangles-blurry-pale-and-have-rounded-corners/ where I'll update and revise this description and add more images.
The answer is (buried) in the Apple Docs:
"To avoid antialiasing when you draw a one-point-wide horizontal or vertical line, if the line is an odd number of pixels in width, you must offset the position by 0.5 points to either side of a whole-numbered position"
Hidden in the Drawing and Printing Guide for iOS: iOS Drawing Concepts, though nothing that specific is to be found in the current, standard (OS X) Cocoa Drawing Guide.
As for the effects of invoking setDefaultLineWidth: the docs also state that:
"A width of 0 is interpreted as the thinnest line that can be rendered on a particular device. The actual rendered line width may vary from the specified width by as much as 2 device pixels, depending on the position of the line with respect to the pixel grid and the current anti-aliasing settings. The width of the line may also be affected by scaling factors specified in the current transformation matrix of the active graphics context."
I found some info suggesting that this is caused by anti-aliasing. Turning anti-aliasing off temporarily is easy:
[[NSGraphicsContext currentContext] setShouldAntialias: NO];
This gives a crisp, 1 pixel line. After drawing just switch it on again.
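In context that might look like this (a sketch; "path" stands in for whatever you are stroking):

[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext] setShouldAntialias:NO];
[path stroke];                                // crisp, un-antialiased 1 pixel line
[NSGraphicsContext restoreGraphicsState];     // antialiasing back on for the rest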
I tried the solution suggested by Francis McGrew, offsetting the x coordinate by 0.5; however, that did not make any difference to the appearance of my line.
EDIT:
To be more specific, I changed x and y coordinates individually and together with an offset of 0.5.
EDIT 2:
I must have done something wrong, as changing the coordinates with an offset of 0.5 actually does work. The end result is better than the one obtained by switching off the anti-aliasing, so I'll make Francis McGrew's answer the accepted answer.
