Best approach for "game" problem - windows-phone-7

I have to tell you, I'm completely NEW to XNA, and I know NOTHING about vertices, multisampling, etc.
However, I love programming and Windows Phone so much that I wanted to take on an immense challenge... creating an XNA game! :D
OK, let's stop the story and start explaining.
I'm making a big game which I've been working on for almost a month; it's almost done.
I just have one big issue, which is the central point of the game... think of it as a 1024x800 puzzle: each point of the puzzle can be clicked, and when it's clicked it must change color.
So we have two hard things to do:
1) UNDERSTAND WHICH PIECE I'VE CLICKED
2) COLOR THAT PIECE
I thought of two approaches.
FIRST APPROACH
1 big puzzle background PNG, 1024x800.
N PNGs, one per puzzle piece, each with a transparent layer around the piece (each PNG is 1024x800, with the piece in its correct position).
By merging the N+1 PNGs we get the complete puzzle. Now it's REALLY easy to tell which piece I've clicked: I just cycle through the N textures, and the one that doesn't have a transparent pixel at the clicked point is the piece!
Then, to color it, I just tint that texture in the draw phase.
It's easy; the problem is that dragging and zooming 50*4 PNGs is really slow :(
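For concreteness, the hit test for this approach could look roughly like this (a minimal sketch assuming XNA's Texture2D.GetData; pieceTextures is a hypothetical array holding the N piece textures):

    // Returns the index of the clicked piece, or -1 for the background.
    int HitTest(Texture2D[] pieceTextures, int x, int y)
    {
        Color[] pixel = new Color[1];
        for (int i = 0; i < pieceTextures.Length; i++)
        {
            // Read the single texel under the click from mip level 0.
            pieceTextures[i].GetData(0, new Rectangle(x, y, 1, 1), pixel, 0, 1);
            if (pixel[0].A != 0)   // non-transparent: this piece was clicked
                return i;
        }
        return -1;
    }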
SECOND APPROACH
1 big puzzle PNG, 1024x800, with all the pieces merged; EACH piece gets a different fill color, e.g. piece 1 is (250,250,250), piece 2 is (245,245,245), etc.
After the PNG has been loaded, I compute the pixel indexes of each piece using GetData and store them in one array per piece.
Finding the selected piece is easy: I just compute y*width+x and look for the piece whose array contains that value.
The problem is coloring: I have to iterate over the array, change the RGB, and finally call SetData. It takes about a second to color a piece. That MAY be acceptable, but I want better...
The second approach is MUCH, MUCH faster when dragging and zooming, because there's only one PNG, and thanks to that I can use bigger resolutions; it's by far the better approach.
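Building the per-pixel lookup for the second approach might be sketched like this (again a minimal sketch assuming XNA's GetData; colorToPiece is a hypothetical Dictionary<Color, int> mapping each fill color to its piece number):

    // Build a piece-id lookup, one entry per pixel of the color-keyed PNG.
    Color[] pixels = new Color[texture.Width * texture.Height];
    texture.GetData(pixels);
    int[] pieceAt = new int[pixels.Length];
    for (int i = 0; i < pixels.Length; i++)
        pieceAt[i] = colorToPiece[pixels[i]];   // e.g. (250,250,250) -> piece 1

    // The hit test is then a single array read: pieceAt[y * texture.Width + x]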
any suggestions??
Thanks,
Luca

So you have a 1024x800 puzzle, with each pixel being a piece? You are not going to want to make 819,200 image files or define 819,200 individual behaviors, one per pixel.
You need to (well, should) use object-oriented concepts for this. Here is a basic idea of the path I think you should take:
Define a "Piece" class. This Piece represents a game piece.
In the Piece class, define a rectangle which represents this piece on the board. If your piece is an irregular shape (like an S tetromino, etc.), define multiple rectangles. If you have multiple different types of pieces (with different shapes), you will want to use inheritance and make each type of piece inherit from the Piece class but define its own rectangles. However, if your 'pieces' are just pixels on the screen, just make a 1x1 Rectangle.
Fill the rectangles with a color using SpriteBatch. You can use a single white pixel png and stretch it to fill the rectangle(s) as well as specify the color.
Use the rectangles to hit-test your pieces with touch. If a touch falls within your rectangle, you have selected that piece. If your pieces are only 1 pixel in size, it's even easier: simply lookup the piece at the touch position's X and Y values.
Now for the game board: depending on the shape and possible positions of the pieces, you will need to select a way to store them. If all your pieces were squares, you could store them in a 2D array. Otherwise you might use a map, where the key is the piece's position and the value is the piece itself.
A basic summary of this approach: don't try to store your board in a single texture. Writing/reading texture data is slow and textures are not made to act as data structures. Instead, store your pieces as individual objects. Iterate through each piece and draw it to the screen.
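A minimal sketch of this idea (whitePixel is assumed to be a 1x1 white Texture2D created elsewhere with SetData):

    class Piece
    {
        public Rectangle Bounds;   // the piece's area on the board
        public Color Fill;         // the piece's current color

        public bool HitTest(Point touch)
        {
            return Bounds.Contains(touch);
        }

        public void Draw(SpriteBatch batch, Texture2D whitePixel)
        {
            // Stretch the single white pixel over Bounds, tinted with Fill.
            batch.Draw(whitePixel, Bounds, Fill);
        }
    }

Changing a piece's color is then a simple field assignment instead of a texture SetData, and all pieces can be drawn inside one SpriteBatch.Begin/End pair.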

Related

Any way to algorithmically remove discolorations from aerial imagery

I don't know much about image processing so please bear with me if this is not possible to implement.
I have several sets of aerial images of the same area originating from different sources. The pictures were taken during different seasons, under different lighting conditions, etc. Unfortunately some images look patchy and suffer from discolorations, or are partially obstructed by clouds or pixelated; see for example picture1 and picture2.
I would like to take as input several images of the same area and (by somehow averaging them) produce one picture of improved quality. I know some C/C++, so I could use an image processing library.
Can anybody propose an image processing algorithm to achieve this, or does anyone know of research done in this field?
I would try a "color twist" transform, i.e. a 3x3 matrix applied to the RGB components. To implement it, you need to pick color samples in areas that are split by a border, on both sides. You should find three significantly different reference colors (hence six samples). This will allow you to write the nine linear equations that determine the matrix coefficients.
Then you will correct the altered areas by means of this color twist. As the geometry of these areas is intertwined with the field patches, I don't see a better way than contouring the regions by hand.
In the case of the second picture, the limits of the regions are blurred so that you will need to blur the region mask as well and perform blending.
In any case, don't expect a perfect repair of these problems, as the transform might be nonlinear and completely erasing the edges will be difficult. I also think the colors are so washed out in places that restoring them might create ugly artifacts.
For the sake of illustration, here is a quick attempt with Photoshop using manual HLS adjustment (less powerful than a color twist).
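For what it's worth, here is a rough sketch of the matrix derivation (my own illustration, not the answerer's code): if M must send the three source samples s_i to the three destination samples d_i, then M = D * S^-1, where S and D hold the samples as columns.

    // Derive the 3x3 color twist from three (source, destination) color pairs.
    // srcs[i] and dsts[i] are RGB triples; the three source colors must be
    // linearly independent, or S is not invertible.
    static double[,] ColorTwist(double[][] srcs, double[][] dsts)
    {
        double[,] S = new double[3, 3], D = new double[3, 3];
        for (int i = 0; i < 3; i++)
            for (int c = 0; c < 3; c++)
            {
                S[c, i] = srcs[i][c];   // samples as columns
                D[c, i] = dsts[i][c];
            }

        // Invert S via the adjugate (fine for a 3x3).
        double det = S[0, 0] * (S[1, 1] * S[2, 2] - S[1, 2] * S[2, 1])
                   - S[0, 1] * (S[1, 0] * S[2, 2] - S[1, 2] * S[2, 0])
                   + S[0, 2] * (S[1, 0] * S[2, 1] - S[1, 1] * S[2, 0]);
        double[,] inv = new double[3, 3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                inv[c, r] = (S[(r + 1) % 3, (c + 1) % 3] * S[(r + 2) % 3, (c + 2) % 3]
                           - S[(r + 1) % 3, (c + 2) % 3] * S[(r + 2) % 3, (c + 1) % 3]) / det;

        // M = D * S^-1; each corrected pixel is then M * (r, g, b).
        double[,] M = new double[3, 3];
        for (int r = 0; r < 3; r++)
            for (int c = 0; c < 3; c++)
                for (int k = 0; k < 3; k++)
                    M[r, c] += D[r, k] * inv[k, c];
        return M;
    }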
The first thing I thought of was a kernel matrix of sorts.
Do a first pass over the photo and use an edge detection algorithm to determine the borders between the photos. This should be fairly trivial; however, you will need to eliminate any overlap/fading (it looks like there's a bit in picture 2), and you'll see why in a minute.
Do a second pass right along each border you've detected, and assume that the pixels on either side of the border should be the same color. Determine the difference between the red, green, and blue values, average it along the entire length of the line, then divide it by two. The image with the lower red, green, or blue value gets this new value added; the one with the higher value gets it subtracted.
On either side of this line, every pixel should now be exactly the same. You can remove one of these rows if you'd like, but if the lines don't run the length of the image this could cause size issues, and the line will likely not be very noticeable.
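A bare-bones version of that second pass, assuming a vertical seam at column bx of an image held in per-channel float arrays (the names here are mine, not the answerer's):

    // Average the left-right difference per channel along the seam.
    double dr = 0, dg = 0, db = 0;
    for (int y = 0; y < height; y++)
    {
        dr += r[bx - 1, y] - r[bx, y];
        dg += g[bx - 1, y] - g[bx, y];
        db += b[bx - 1, y] - b[bx, y];
    }
    dr /= 2.0 * height;   // average over the line, then halve it
    dg /= 2.0 * height;
    db /= 2.0 * height;

    // Shift each side halfway toward the other (apply over the whole
    // region each source image covers, not just the border columns).
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            double s = (x < bx) ? -1.0 : 1.0;
            r[x, y] += s * dr;
            g[x, y] += s * dg;
            b[x, y] += s * db;
        }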
This could be made far more complicated by generating a filter by passing along this line - I'll leave that to you.
The issue with this could be areas where there was development, fall colors, etc.; these might mess with your algorithm, but there's only one way to find out!

Is it possible to import a Collada model that aligns to pixels?

Assume I have a model that is simply a cube. (It is more complicated than a cube, but for the purposes of this discussion, we will simplify.)
So when I am in Sketchup, the cube is Xmm by Xmm by Xmm, where X is an integer. I then export a Collada file and subsequently load it into threejs.
Now if I look at the geometry bounding box, the values are floats, not integers.
So now assume I am putting cubes next to each other with a small space in between say 1 pixel. Because screens can't draw half pixels, sometimes I see one pixel and sometimes I see two, which causes a lack of uniformity.
I think I can resolve this satisfactorily if I can somehow get the imported model to have integer dimensions. I have full access to all parts of the model starting with Sketchup, so any point in the process is fair game.
Is it possible?
Thanks.
Clarification: My app will have two views. The view that this is concerned with is using an OrthographicCamera that is looking straight down on the pieces, so this is really a 2D view. For purposes of this question, after importing the model, it should look like a grid of squares with uniform spacing in between.
UPDATE: I would ask that you please not respond unless you can provide an actual answer. If I need help finding a way to accomplish something, I will post a new question. For this question, I am only interested in knowing if it is possible to align an imported Collada model to full pixels and if so how. At this point, this is mostly to serve my curiosity and increase my knowledge of what is and isn't possible. Thank you community for your kind help.
Now you have to learn this thing about 3D programming: numbers don't mean anything :)
In the real world 1mm, 2.13cm and 100Kg specify something that can be measured and reproduced. But for a drawing library, those numbers don't mean anything.
In a drawing library, 3D points are always represented with 3 float values. You submit your points to the library, it transforms them into 2D points (they must be viewed on a 2D surface), and finally these 2D points are passed to a rasterizer, which translates the floating point values into integer values (the screen has a resolution of NxM pixels, both N and M being integers) and colors the actual pixels.
Your problem simply is not a problem. A cube of 1mm really means nothing: if you are designing an astronomical application, that object will never be seen, but in a microscopic one it will be way larger than the screen. What matters are the coordinates of the points and the scale of the overall application.
Now back to your cubes: don't try to insert 1px between two adjacent ones. Your cubes are defined in terms of mm, so choose a distance in mm appropriate to your world, and let the rasterizer do its job of translating them to pixels.
Two co-workers that I tracked down have informed me that this is indeed impossible using normal means.

Rendering very large image from scratch then splitting it into tiles (for Google Maps)

I have a large set of Google Maps API v3 polylines and markers that need to be rendered as transparent PNGs (implemented as an ImageMapType). I've done all the math/geometry regarding transformations from latLng to pixel and tile coordinates.
The problem is: at the maximum allowable zoom for my app, which is 18, the compound image would span at least 80,000 pixels in both width and height. So rendering it in one piece and then splitting it into tiles is impossible.
I tried splitting the polylines beforehand and placing the parts into tiles, then rendering each tile alone, which so far works almost fine. But it will become very difficult when I need to draw stylized markers, text, and other fancy stuff.
So far I have used C# GDI+ as the drawing method (the ol' Bitmap / Graphics pair).
Many questions here are about splitting an already existing image, storing it, and linking it to the API. I already know how to do that.
My problem is: how do I draw the initial very large image and then split it up? It doesn't really need to be a true image/bitmap/call-it-whatever-you-want solution. A friend suggested I use SVG, but I don't know of any good rendering solutions to suit my needs.
To make it a little easier to comprehend, think of it in terms of input/output. My input is the data that I need to draw (lines, circles, text, etc.) that spreads across tens of thousands of pixels, and the output must be the tiles. I really don't care what the 'magic box' is, and I don't even care what the platform is.
I ran into the same problem when creating custom tiles, and you are on the right track with your solution of creating one tile at a time. You just need to add some strategy to the process. What I do is this:
Pseudo code:
for each tile {
    determine the lat/lon corners of the tile
    query the database and load the objects that are within this tile
    for each object {
        calculate the tile pixels on which the object should be painted [*A*]
        draw the object on the tile
    }
    save the tile (you're done with this tile)
}
alternatively:
Pseudo code:
for each object to be drawn {
    determine which tile the object should be painted on
    calculate the tile pixels on which the object should be painted [*A*]
    get that tile; if it doesn't exist yet, create a new one
    draw the object on the tile
    save the tile (you might need to draw more on this tile later)
}
I do this with Perl and the GD library.
[*A*] When painting objects that span more than one tile: if the object begins on the current tile, the part of it that extends past the edge will be left out automatically, because you'll be attempting to paint outside the tile. If the object began on the previous tile and you're drawing the second part, the pixel coordinates should be negative, meaning that it began on the neighboring tile.
This is a bit hard to explain in a written post so please feel free to ask for further clarification if you need it and I'll edit the answer.
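Since the question mentions C# and GDI+, the first variant might translate to something like this (a rough sketch; TileCoord, MapObject, LoadObjectsInTile, and DrawObject are hypothetical placeholders for your own types, data access, and drawing code):

    // using System.Drawing; using System.Drawing.Imaging;
    const int TileSize = 256;
    foreach (TileCoord t in tilesToRender)
    {
        using (Bitmap bmp = new Bitmap(TileSize, TileSize, PixelFormat.Format32bppArgb))
        using (Graphics g = Graphics.FromImage(bmp))
        {
            // World-pixel coordinate of this tile's top-left corner.
            long originX = (long)t.X * TileSize;
            long originY = (long)t.Y * TileSize;
            foreach (MapObject obj in LoadObjectsInTile(t))
            {
                // Draw in tile-local pixels; anything outside 0..255 is
                // clipped automatically, as note [*A*] above describes.
                DrawObject(g, obj, originX, originY);
            }
            bmp.Save(string.Format(@"tiles\{0}\{1}\{2}.png", t.Zoom, t.X, t.Y),
                     ImageFormat.Png);
        }
    }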
I'd recommend getting to know GDAL (http://gdal.org) and its libraries. It has libraries for rasterization, tiling, data conversion, projections, warping, and much more.

matching jigsaw puzzle pieces

I have nothing useful to do and was playing with a jigsaw puzzle like this:
(image: http://manual.gimp.org/nl/images/filters/examples/render-taj-jigsaw.jpg)
and I was wondering if it'd be possible to make a program that assists me in putting it together.
Imagine that I have a small puzzle, like 4x3 pieces, but the little tabs and blanks are non-uniform: different pieces have tabs at different heights, of different shapes, of different sizes. What I'd do is take pictures of all of these pieces, let a program analyze them, and store their attributes somewhere. Then, when I pick up a piece, I could ask the program to tell me which pieces should be its 'neighbours', or, if I have to fill in a blank, it'd tell me what the wanted puzzle piece(s) look like.
Unfortunately I've never done anything with image processing or pattern recognition, so I'd like to ask you for some pointers: how do I recognize a jigsaw piece (basically a square with tabs and holes) in a picture?
Then I'd probably need to rotate it so it's in the right position, scale it to some proportion, and then measure the tab/blank on each side, as well as each side's slope, if present.
I know it would be too time consuming to scan/photograph 1000 puzzle pieces and use all of that; this would just be a pet project where I'd learn something new.
Data acquisition
(This is known as the Chroma Key, Blue Screen, or Background Color method.)
Find a well-lit room with the least lighting variation across the room.
Find a color (hue) that is rarely used in the entire puzzle / picture.
Get colored paper of exactly that color.
Place as many puzzle pieces on the colored paper as will fit.
You can categorize the pieces into batches and use that as a hint for the computer later on.
Make sure the pieces do not overlap or touch each other.
Do not worry about orientation yet.
Take a picture and download it to the computer.
Color calibration may be needed, because the chroma key background may have upset the built-in color balance of the digital camera.
Acquisition data processing
Get some computer vision software:
OpenCV, MATLAB, C++, Java, Python Imaging Library, etc.
Perform connected-component analysis on the chroma key color in the image.
Ask for the contours of the holes of the connected component; those holes are the puzzle pieces.
Fix errors in the detected list.
Choose the indexing vocabulary (cf. Ira Baxter's post) and measure the pieces.
If the pieces are rectangular, find the corners first.
If the pieces are slightly-off quadrilaterals, the side lengths (measured corner to corner) are also a valuable signature.
Search for "Shape Context" on SO or Google or here.
Finally, get the color histogram of each piece, so that you can query pieces by color later.
To make them searchable, put them in a database, so that you can query pieces with any combination of the indexing vocabulary.
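If you end up rolling the connected-component step yourself instead of using one of the libraries above, a bare-bones 4-connected flood fill is enough. This is my own sketch, a slight variant of the above: mask[x, y] is assumed true where the pixel is NOT the chroma key color, so each labelled component is directly one piece (equivalent, for non-touching pieces, to taking the holes of the background component).

    // using System.Collections.Generic; using System.Drawing;
    // Label 4-connected components of non-background pixels; 0 = background.
    static int[,] LabelComponents(bool[,] mask)
    {
        int w = mask.GetLength(0), h = mask.GetLength(1);
        int[,] label = new int[w, h];
        int next = 0;
        var stack = new Stack<Point>();
        for (int sx = 0; sx < w; sx++)
            for (int sy = 0; sy < h; sy++)
            {
                if (!mask[sx, sy] || label[sx, sy] != 0) continue;
                next++;   // start a new component at this unlabeled piece pixel
                stack.Push(new Point(sx, sy));
                while (stack.Count > 0)
                {
                    Point p = stack.Pop();
                    if (p.X < 0 || p.Y < 0 || p.X >= w || p.Y >= h) continue;
                    if (!mask[p.X, p.Y] || label[p.X, p.Y] != 0) continue;
                    label[p.X, p.Y] = next;
                    stack.Push(new Point(p.X + 1, p.Y));
                    stack.Push(new Point(p.X - 1, p.Y));
                    stack.Push(new Point(p.X, p.Y + 1));
                    stack.Push(new Point(p.X, p.Y - 1));
                }
            }
        return label;
    }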
A step back to the problem itself. The problem of building a puzzle can be easy (P) or hard (NP), depending on whether each piece fits only one neighbour or many. If there is only one fit for each edge, then you just find the neighbour for each piece/side and you're done (O(#pieces*#sides)). If some pieces allow multiple fits with different neighbours, then, in order to complete the whole puzzle, you may need backtracking (because you made a wrong choice and got stuck).
However, the first problem to solve is how to represent pieces. If you want to represent arbitrary shapes, then you can probably use transparency or masks to represent which areas of a tile are actually part of the piece. If you use square shapes, the problem may be easier. In the latter case, you can consider the last row of pixels on each side of the square and match it against the most similar row of pixels across all other pieces.
You can use the second approach to actually help you solve a real puzzle, despite using square tiles. Real puzzles are normally built on an NxM grid of pieces. When scanning the image from the box, you split it into the same NxM grid of square tiles and get the system to solve that. The problem is then to visually map the actual squiggly piece you hold in your hand to a tile inside the system (hard when the pieces are small and uniformly coloured). But you get the same problem if you represent arbitrary shapes internally.
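The edge matching in the last paragraph boils down to comparing border rows of pixels; for instance (a sketch of mine, assuming each tile is a square Color[,] of side size):

    // Sum of squared channel differences between A's right edge and B's left
    // edge; lower scores mean a better fit.
    static double EdgeMismatch(Color[,] a, Color[,] b, int size)
    {
        double sum = 0;
        for (int y = 0; y < size; y++)
        {
            Color p = a[size - 1, y];   // last column of A
            Color q = b[0, y];          // first column of B
            sum += (p.R - q.R) * (p.R - q.R)
                 + (p.G - q.G) * (p.G - q.G)
                 + (p.B - q.B) * (p.B - q.B);
        }
        return sum;
    }

For each piece/side, the candidate neighbour is then the piece minimizing this score, with the other three edges handled analogously.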

Raytracing (LoS) on 3D hex-like tile maps

Greetings,
I'm working on a game project that uses a 3D variant of hexagonal tile maps. Tiles are actually cubes, not hexes, but are laid out just like hexes (because a square can be turned into a cube to extrapolate from 2D to 3D, but there is no 3D version of a hex). Rather than a verbose description, here's an example of a 4x4x4 map:
(I have highlighted an arbitrary tile (green) and its adjacent tiles (yellow) to help describe how the whole thing is supposed to work; but the adjacency functions are not the issue, that's already solved.)
I have a struct type to represent tiles, and maps are represented as a 3D array of tiles (wrapped in a Map class to add some utility methods, but that's not very relevant).
Each tile is supposed to represent a perfectly cubic space, and they are all exactly the same size. Also, the offset between adjacent "rows" is exactly half the size of a tile.
That's enough context; my question is:
Given the coordinates of two points A and B, how can I generate a list of the tiles (or, rather, their coordinates) that a straight line between A and B would cross?
That would later be used for a variety of purposes, such as determining Line-of-sight, charge path legality, and so on.
BTW, this may be useful: my maps use (0,0,0) as a reference position. The 'jagging' of the map can be defined as offsetting each tile by ((y+z) mod 2) * tileSize/2.0 to the right from the position it'd have in a "sane" cartesian system. For the non-jagged rows, that yields 0; for rows where (y+z) mod 2 is 1, it yields 0.5 tiles.
I'm working in C# 4, targeting .NET Framework 4.0, but I don't really need specific code, just the algorithm to solve this weird geometric/mathematical problem. I have been trying for several days to solve it, to no avail; and trying to draw the whole thing on paper to "visualize" it didn't help either :(
Thanks in advance for any answer
Until one of the clever SOers turns up, here's my dumb solution. I'll explain it in 2D 'cos that makes it easier to explain, but it will generalise to 3D easily enough. I think any attempt to work this entirely in cell-index space is doomed to failure (though I'll admit that's just what I think, and I look forward to being proved wrong).
So you need to define a function that maps from cartesian coordinates to cell indices. This is straightforward, if a little tricky. First, decide whether point(0,0) is the bottom-left corner of cell(0,0), its centre, or some other point. Since it makes the explanations easier, I'll go with the bottom-left corner. Observe that any point(x, floor(y)==0) maps to cell(floor(x), 0). Indeed, any point(x, even(floor(y))) maps to cell(floor(x), floor(y)).
Here I invent the boolean function even, which returns True if its argument is an even integer. I'll use odd next: any point(x, odd(floor(y))) maps to cell(floor(x-0.5), floor(y)).
Now you have the basics of the recipe for determining lines-of-sight.
You will also need a function to map from cell(m,n) back to a point in cartesian space. That should be straightforward once you have decided where the origin lies.
Now, unless I've misplaced some brackets, I think you are on your way. You'll need to:
decide where in cell(0,0) you position point(0,0); and adjust the function accordingly;
decide where points along the cell boundaries fall; and
generalise this into 3 dimensions.
Depending on the size of the playing field you could store the cartesian coordinates of the cell boundaries in a lookup table (or other data structure), which would probably speed things up.
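Following that recipe, the 2D mapping could be sketched as below (a minimal sketch with the origin at the bottom-left corner of cell(0,0), as in the explanation; negative indices would need extra care with the modulo):

    // Cartesian point -> cell indices, with odd rows shifted half a cell right.
    static void PointToCell(double x, double y, out int m, out int n)
    {
        n = (int)Math.Floor(y);
        double shifted = (n % 2 == 0) ? x : x - 0.5;
        m = (int)Math.Floor(shifted);
    }

    // Cell indices -> cartesian bottom-left corner of that cell.
    static void CellToPoint(int m, int n, out double x, out double y)
    {
        x = m + ((n % 2 == 0) ? 0.0 : 0.5);
        y = n;
    }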
Perhaps you can avoid all the complex math if you look at your problem in another way:
I see that you only shift your blocks (alternating) along the first axis by half the block size. If you split your blocks in half along this axis, the above example becomes (with the shifts) a simple 9x4x4 cartesian coordinate system with regularly stacked blocks. Doing the raytracing then becomes much simpler and less error-prone.
