I'm developing a simple game where the user can place different but modular objects (for instance: tracks, roads, etc.).
My question is: how do I match and place different objects when one is placed near another?
My first approach is to create a hidden child object (a box) for each modular object and put it on the border where it is possible to place another object (see my image example), so I can use those coordinates (x, y, z) to align the other object.
But I don't know if this is the best approach.
Thanks
Summary:
1. Define what a "snapping point" is
2. Define what your threshold is
3. Update the new game object's position
Little Explanation
1.
So I suppose that you need a way to define which parts of the object are the "snapping points".
They can be obvious in some cases, like a cube, where every vertex could be a snapping point, but it's hard to define them for every vertex of an amorphous object.
A simple solution could be the one proposed by @PierreBaret, which consists of defining on your transform component which points are the "snapping points".
The other one is the one you propose: creating empty game objects that act as snapping point locations on the game object.
2. Once you have those snapping points, when you drop your new gameObject you need to define a threshold, since you don't want every object to always snap to the nearest game object.
3. So you define a minimum distance between snapping points; if your snapping point is within that threshold, you need to update its position to align with the snapped point.
Visual Representation:
Note: The threshold distance shown is just ONE of the 4 threshold checks on the 4 vertices of the square; this dark blue circle should be replicated 3 more times, one for each green snapping point of the red square.
Of course this method seems expensive; you can make some improvements, like setting a first threshold between gameObjects and only checking the snapping threshold distance if the gameObject is inside that first threshold.
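To make this concrete, here's a minimal sketch in Unity C# (the class, field, and method names are my own, not from any existing asset): each piece keeps its snapping points as child Transforms, and when a new piece is dropped we look for the closest pair of points within the threshold and shift the whole piece so the two points coincide.

```csharp
using UnityEngine;

public class SnappablePiece : MonoBehaviour
{
    public Transform[] snapPoints;        // empty child objects placed on the borders
    public float snapThreshold = 0.25f;   // maximum distance at which we snap

    // Try to snap this piece to another piece that is already placed.
    public bool TrySnapTo(SnappablePiece other)
    {
        float bestDistance = snapThreshold;
        Vector3 bestOffset = Vector3.zero;
        bool found = false;

        foreach (Transform mine in snapPoints)
        {
            foreach (Transform theirs in other.snapPoints)
            {
                float distance = Vector3.Distance(mine.position, theirs.position);
                if (distance < bestDistance)
                {
                    bestDistance = distance;
                    bestOffset = theirs.position - mine.position;  // move my point onto theirs
                    found = true;
                }
            }
        }

        if (found)
        {
            transform.position += bestOffset;  // adjust the whole piece, not just the point
        }
        return found;
    }
}
```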
Hope it helps!
Approach for arbitrary objects/models and deformable models.
[A] A physical approach would consider all the surfaces of the 2 objects, and you might need to check that objects don't overlap, using dot products between surfaces. That's a bit more computationally expensive, but nothing nasty. If there is no matching involved here, you'll be able to add matching features (see [B]). However, it's the only way to work with non-predefined models or deformable models.
Approaches for matching simple and complex models
[B] Snapping points are a good thing, but they are not sufficient alone. I think you need to make an object have:
a sparse representation (e.g., reducing a complex oriented sphere to a cube),
and place key snapping points,
tagged by polarity or color, and possibly orientation (that gives oriented snapping points); e.g., in the case of rails, you'll want rails to snap {+} with {+} and forbid {+} with {-}. In the case of a more complex object, or when you have several orientations (e.g., 2 faces of a surface, but only one is a candidate for a matching pair of objects), you'll need more than 2 polarities, but 3 different ones per candidate matching surface or feature, hence the colors (or any enumeration). You need 3 different colors to make sure there is a unique 3D space configuration. You create something that in chemistry is called an enantiomer.
You can also use point pair features that describe the relative position and orientation of two oriented points, when an oriented surface is not appropriate.
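As a rough illustration of those oriented, polarity-tagged snapping points (the names and the angle tolerance are assumptions, not from any engine):

```csharp
using UnityEngine;

public enum Polarity { Plus, Minus }

public struct OrientedSnapPoint
{
    public Vector3 position;
    public Vector3 direction;   // outward-facing connector direction
    public Polarity polarity;
}

public static class SnapRules
{
    // Two connectors are compatible when their polarities match (e.g. rail {+} with {+})
    // and they roughly face each other, i.e. their directions are close to opposite.
    public static bool AreCompatible(OrientedSnapPoint a, OrientedSnapPoint b, float maxAngleDeg = 15f)
    {
        if (a.polarity != b.polarity)
            return false;
        float angle = Vector3.Angle(a.direction, -b.direction);
        return angle <= maxAngleDeg;
    }
}
```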
References
Some are computer vision papers or book extracts, but they expose algorithms and concepts to achieve what I developed in my answer.
Model Globally, Match Locally: Efficient and Robust 3D Object Recognition, Drost et al.
3D Models and Matching
Related
The H3 library uses a Dymaxion orientation, which means that the hexagon grid is rotated to an unusual angle relative to the equator/meridian lines. This makes sense when modelling the Earth, as the twelve pentagons then all lie in the water, but would be unnecessary when using the library to map other spheres (like the sky or other planets). In this case it would be more intuitive and aesthetically pleasing to align the icosahedron to put a pentagon at the poles and along the meridian. I'm just trying to work out what I would need to change in the library to achieve that? It looks like I would need to recalculate the faceCenterGeo and faceCenterPoint tables in faceijk.c, but do I need to recalculate faceAxesAzRadsCII as well? I don't really understand what that latter table is...
Per this related answer, the main changes you'd need for other planets are to change the radius of the sphere (only necessary if you want to calculate distances or areas) and, as you ask, the orientation of the icosahedron. For the latter:
faceCenterGeo defines the icosahedron orientation in lat/lng points
faceCenterPoint is a table derived from faceCenterGeo that defines the center of each face as 3d coords on a unit sphere. You could create your own derivation using generateFaceCenterPoint.c
faceAxesAzRadsCII is a table derived from faceCenterGeo that defines the angle from each face center to each of its three vertices. This does not have a generation script, and TBH I don't know how it was originally generated. It's used in the core algorithms translating between grid coordinates and geo coordinates, however, so you'd definitely need to update it.
I'd strongly suggest that taking this approach is a Bad Idea:
It's a fair amount of work - not (just) the calculations, but recompiling the code, maintaining a fork, possibly writing bindings in other languages for your fork, etc.
You'd break most tests involving geo input or output, so you'd be flying blind as to whether your updated code is working as expected.
You wouldn't be able to take advantage of other projects built on H3, e.g. bindings for other languages and databases.
If you want to re-orient the geometry for H3, I'd suggest doing exactly that - apply a transform to the input geo coordinates you send to H3, and a reverse transform to the output geo coordinates you get from H3. This has a bunch of advantages over modifying the library code:
It's a lot easier
You could continue to use the maintained library
You could apply these transformations outside of the bindings, in the language of your choice
Your own code is well-separated from 3rd-party library code
There's probably a very small performance penalty to this approach, but in almost all cases that's a tiny price to pay compared to the difficulties you avoid.
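As a rough sketch of that transform approach (plain C#, no H3 calls; all helper names are hypothetical): rotate each lat/lng so your chosen point becomes the north pole before handing it to H3, and apply the inverse rotation to whatever H3 returns.

```csharp
using System;
using System.Numerics;

public static class GeoRotation
{
    // Convert lat/lng (radians) to a unit vector and back.
    static Vector3 ToUnitVector(double lat, double lng) =>
        new Vector3(
            (float)(Math.Cos(lat) * Math.Cos(lng)),
            (float)(Math.Cos(lat) * Math.Sin(lng)),
            (float)Math.Sin(lat));

    static (double lat, double lng) ToLatLng(Vector3 v)
    {
        v = Vector3.Normalize(v);
        return (Math.Asin(v.Z), Math.Atan2(v.Y, v.X));
    }

    // Build the rotation that sends (poleLat, poleLng) to the geographic north pole.
    public static Quaternion RotationToPole(double poleLat, double poleLng)
    {
        Vector3 from = ToUnitVector(poleLat, poleLng);
        Vector3 to = new Vector3(0, 0, 1);
        Vector3 axis = Vector3.Cross(from, to);
        if (axis.LengthSquared() < 1e-12f)           // already at (or opposite to) the pole
            axis = new Vector3(1, 0, 0);
        float angle = (float)Math.Acos(Math.Clamp(Vector3.Dot(from, to), -1f, 1f));
        return Quaternion.CreateFromAxisAngle(Vector3.Normalize(axis), angle);
    }

    // Apply before calling into H3; use Quaternion.Inverse(rotation) on H3 output.
    public static (double lat, double lng) Rotate((double lat, double lng) p, Quaternion rotation) =>
        ToLatLng(Vector3.Transform(ToUnitVector(p.lat, p.lng), rotation));
}
```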
If I make a human model and import it into a game engine, does the game engine know all the point coordinates on the model and rotate each one? Models consist of millions of points, and if I rotate a model 90 degrees, does the game engine calculate each point's new location and rotate it? How does it work? Thanks
This is a bit of a vague question since each game engine will work differently, but in general the game engine will not touch the model coordinates.
Models are usually loaded with model space (or local space) coordinates - this simply means that each vertex is defined with a location relative to the origin of that model. The origin is defined as (0,0,0) and is the point around which rotations take place.
Now the game engine loads and keeps the model in this coordinate space. Then you provide your transformations (such as translation and rotation matrices) to place that model somewhere in your "world" (i.e. the global coordinate space shared by all objects). You also provide the way you want to view this world with various other transforms such as projection and view matrices.
The game engine then takes all of these transformations and passes them to the GPU (or software renderer, in some cases) - it will also setup other stuff such as textures, etc. These are usually set once per frame (or per object for a frame).
Finally, it then passes each vertex that needs to be processed to the renderer. Each vertex is then transformed by the renderer using all the transformations specified to get a final vertex position - first in world space and then in screen space - which it can use to render pixels based on various other information (such as textures and lighting).
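A minimal illustration of that pipeline using System.Numerics (the vertex data and matrix values are made up): the model-space vertices never change; rotating the model only changes one matrix, and each vertex is multiplied by the combined matrices at render time.

```csharp
using System;
using System.Numerics;

// Model-space vertices; in a real engine these come from the imported model file
// and are never modified by a rotation.
Vector3[] modelSpaceVertices =
{
    new Vector3(0, 0, 0), new Vector3(1, 0, 0), new Vector3(0, 1, 0)
};

// One matrix per object describes where it sits in the world (rotation + translation).
Matrix4x4 world = Matrix4x4.CreateRotationY(MathF.PI / 2f)          // "rotate the model 90 degrees"
                * Matrix4x4.CreateTranslation(10f, 0f, 5f);

// How the world is viewed and projected onto the screen.
Matrix4x4 view = Matrix4x4.CreateLookAt(new Vector3(0, 5, 20), Vector3.Zero, Vector3.UnitY);
Matrix4x4 projection = Matrix4x4.CreatePerspectiveFieldOfView(MathF.PI / 3f, 16f / 9f, 0.1f, 1000f);

// The per-vertex work happens at render time (normally on the GPU, not in engine code).
Matrix4x4 worldViewProjection = world * view * projection;
foreach (Vector3 v in modelSpaceVertices)
{
    Vector4 clipSpace = Vector4.Transform(new Vector4(v, 1f), worldViewProjection);
    // ...the rasterizer divides by clipSpace.W and maps the result to pixel coordinates...
}
```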
So the point is, in most cases, the engine really has nothing to do with the rotation of the model/vertices. It is simply a way to manage the model and the various settings that apply to it.
Of course, the engine can rotate the model and modify its vertices, but this is usually only done during loading - for example if the model needs to be converted between different coordinate spaces.
There is a lot more going on, and this is a very basic description of what actually happens. There are many many sources that describe this process in great detail, so I won't even try to duplicate it. Hopefully this gives you enough detail to understand the basics.
I'm searching for a certain object in my photograph:
Object: Outline of a rectangle with an X in the middle. It looks like a rectangular checkbox. That's all. So, no fill, just lines. The rectangle will have the same ratios of length to width but it could be any size or any rotation in the photograph.
I've looked at a whole bunch of image recognition approaches. But I'm trying to determine the best one for this specific task. Most importantly, the object is made of lines and is not a filled shape. Also, there is no perspective distortion, so the rectangular object will always have right angles in the photograph.
Any ideas? I'm hoping for something that I can implement fairly easily.
Thanks all.
You could try using a corner detector (e.g. Harris) to find the corners of the box, the ends and the intersection of the X. That simplifies the problem to finding points in the right configuration.
Edit (response to comment):
I'm assuming you can find the corner points in your image, the 4 corners of the rectangle, the 4 line endings of the X and the center of the X, plus a few other corners in the image due to noise or objects in the background. That simplifies the problem to finding a set of 9 points in the right configuration, out of a given set of points.
My first try would be to look at each corner point A. Then I'd iterate over the points B close to A. Now if I assume that (e.g.) A is the upper left corner of the rectangle and B is the lower right corner, I can easily calculate, where I would expect the other corner points to be in the image. I'd use some nearest-neighbor search (or a library like FLANN) to see if there are corners where I'd expect them. If I can find a set of points that matches these expected positions, I know where the symbol would be, if it is present in the image.
You have to try whether that is good enough for your application. If you have too many false positives (sets of corners of other objects that accidentally form a rectangle + X), you could check if there are lines (i.e. high contrast in the right direction) where you would expect them. And you could check if there is low contrast where there are no lines in the pattern. This should be relatively straightforward once you know the points in the image that correspond to the corners/line endings in the object you're looking for.
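Here's a rough sketch of that idea in C# (the aspect ratio, tolerance, and all names are assumptions): given two detected corners hypothesised to be opposite corners of the rectangle, predict where the remaining corners and the X centre should be and check whether detected corners exist near those positions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Numerics;

static class RectangleXFinder
{
    const float AspectRatio = 2.0f;   // known width : height ratio of the box (assumed)
    const float Tolerance   = 3.0f;   // pixels: how far a detected corner may be from its expected spot

    static bool HasCornerNear(List<Vector2> candidates, Vector2 expected) =>
        candidates.Any(c => Vector2.Distance(c, expected) <= Tolerance);

    // Hypothesis: a and b are opposite corners of the rectangle. Check whether the
    // remaining corners and the X centre are present among the detected corner points.
    public static bool MatchesSymbol(Vector2 a, Vector2 b, List<Vector2> candidates)
    {
        Vector2 diagonal = b - a;
        float diagLen = diagonal.Length();
        float theta = MathF.Atan(1f / AspectRatio);       // angle between diagonal and the long edge
        float w = diagLen * MathF.Cos(theta);             // length of the long edge

        Vector2 centre = (a + b) * 0.5f;
        if (!HasCornerNear(candidates, centre))           // centre of the X
            return false;

        // Two mirror configurations share the same diagonal; accept either one.
        foreach (float sign in new[] { +1f, -1f })
        {
            Vector2 edgeDir = Rotate(Vector2.Normalize(diagonal), sign * theta);
            Vector2 p = a + edgeDir * w;                  // third corner
            Vector2 q = a + b - p;                        // fourth corner (opposite of p)
            if (HasCornerNear(candidates, p) && HasCornerNear(candidates, q))
                return true;
        }
        return false;
    }

    static Vector2 Rotate(Vector2 v, float angle) =>
        new Vector2(v.X * MathF.Cos(angle) - v.Y * MathF.Sin(angle),
                    v.X * MathF.Sin(angle) + v.Y * MathF.Cos(angle));
}
```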
I'd suggest the Generalized Hough Transform. It seems you have a fairly simple, fixed shape. The generalized Hough transform should be able to detect that shape at any rotation or scale in the image. You may need to threshold the original image, or pre-process it in some way, for this method to be useful though.
You can use local features to identify the object in the image. Feature detection wiki
For example, you can calculate features on some reference image which contains only the object you're looking for and save the results to, let's say, a plain text file. After that you can search for the object just by comparing newly calculated features (on images of complex scenes containing the object) with the reference ones.
Here's some good resource on local features:
Local Invariant Feature Detectors: A Survey
Greetings,
I'm working on a game project that uses a 3D variant of hexagonal tile maps. Tiles are actually cubes, not hexes, but are laid out just like hexes (because a square can be turned to a cube to extrapolate from 2D to 3D, but there is no 3D version of a hex). Rather than a verbose description, here goes an example of a 4x4x4 map:
(I have highlighted an arbitrary tile (green) and its adjacent tiles (yellow) to help describe how the whole thing is supposed to work; but the adjacency functions are not the issue, that's already solved.)
I have a struct type to represent tiles, and maps are represented as a 3D array of tiles (wrapped in a Map class to add some utility methods, but that's not very relevant).
Each tile is supposed to represent a perfectly cubic space, and they are all exactly the same size. Also, the offset between adjacent "rows" is exactly half the size of a tile.
That's enough context; my question is:
Given the coordinates of two points A and B, how can I generate a list of the tiles (or, rather, their coordinates) that a straight line between A and B would cross?
That would later be used for a variety of purposes, such as determining Line-of-sight, charge path legality, and so on.
BTW, this may be useful: my maps use (0,0,0) as a reference position. The 'jagging' of the map can be defined as offsetting each tile ((y+z) mod 2) * tileSize/2.0 to the right from the position it'd have in a "sane" cartesian system. For the non-jagged rows, that yields 0; for rows where (y+z) mod 2 is 1, it yields 0.5 tiles.
I'm working in C# 4 targeting the .NET Framework 4.0; but I don't really need specific code, just the algorithm to solve this weird geometric/mathematical problem. I have been trying for several days to solve this, to no avail; and trying to draw the whole thing on paper to "visualize" it didn't help either :( .
Thanks in advance for any answer
Until one of the clever SOers turns up, here's my dumb solution. I'll explain it in 2D 'cos that makes it easier to explain, but it will generalise to 3D easily enough. I think any attempt to try to work this entirely in cell index space is doomed to failure (though I'll admit it's just what I think and I look forward to being proved wrong).
So you need to define a function to map from cartesian coordinates to cell indices. This is straightforward, if a little tricky. First, decide whether point(0,0) is the bottom left corner of cell(0,0) or the centre, or some other point. Since it makes the explanations easier, I'll go with bottom-left corner. Observe that any point(x,floor(y)==0) maps to cell(floor(x),0). Indeed, any point(x,even(floor(y))) maps to cell(floor(x),floor(y)).
Here, I invent the boolean function even which returns True if its argument is an even integer. I'll use odd next: any point(x, odd(floor(y))) maps to cell(floor(x-0.5), floor(y)).
Now you have the basics of the recipe for determining lines-of-sight.
You will also need a function to map from cell(m,n) back to a point in cartesian space. That should be straightforward once you have decided where the origin lies.
Now, unless I've misplaced some brackets, I think you are on your way. You'll need to:
decide where in cell(0,0) you position point(0,0); and adjust the function accordingly;
decide where points along the cell boundaries fall; and
generalise this into 3 dimensions.
Depending on the size of the playing field you could store the cartesian coordinates of the cell boundaries in a lookup table (or other data structure), which would probably speed things up.
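A small sketch of that 2D mapping in C# (assuming point (0,0) sits at the bottom-left corner of cell (0,0), cells of size 1, and odd rows shifted right by half a cell; to generalise to the 3D map described in the question, you'd test the parity of (y+z) instead of the row alone):

```csharp
using System;

static class JaggedGrid
{
    public const double CellSize = 1.0;

    public static (int col, int row) PointToCell(double x, double y)
    {
        int row = (int)Math.Floor(y / CellSize);
        double shift = (row % 2 != 0) ? 0.5 * CellSize : 0.0;   // odd rows are offset to the right
        int col = (int)Math.Floor((x - shift) / CellSize);
        return (col, row);
    }

    // Bottom-left corner of a cell; useful for walking along cell boundaries.
    public static (double x, double y) CellToPoint(int col, int row)
    {
        double shift = (row % 2 != 0) ? 0.5 * CellSize : 0.0;
        return (col * CellSize + shift, row * CellSize);
    }
}
```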
Perhaps you can avoid all the complex math if you look at your problem in another way:
I see that you only shift your blocks (alternating) along the first axis by half the block size. If you split up your blocks along this axis, the above example becomes (with shifts) a simple (9x4x4) cartesian coordinate system with regularly stacked blocks. Now doing the raytracing becomes much simpler and less error prone.
I've always wondered this. In a game like GTA, where there are tens of thousands of objects, how does the game know as soon as you're on a health pack?
There can't possibly be an event listener for each object? Iterating isn't good either? I'm just wondering how it's actually done.
There's no one answer to this but large worlds are often space-partitioned by using something along the lines of a quadtree or kd-tree which brings search times for finding nearest neighbors below linear time (fractional power, or at worst O( N^(2/3) ) for a 3D game). These methods are often referred to as BSP for binary space partitioning.
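For illustration, here's a compact point-quadtree sketch in C# (2D for brevity; all names are mine): objects are bucketed by position, so a query like "what is within pickup range of the player?" only visits a handful of nodes instead of every object in the world.

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

class QuadTree
{
    const int Capacity = 8;          // max points per leaf before it splits
    const int MaxDepth = 10;         // stops runaway splitting when points coincide
    readonly Vector2 _min, _max;     // axis-aligned bounds of this node
    readonly int _depth;
    readonly List<Vector2> _points = new List<Vector2>();
    QuadTree[] _children;            // null while this node is a leaf

    public QuadTree(Vector2 min, Vector2 max, int depth = 0)
    {
        _min = min; _max = max; _depth = depth;
    }

    bool Contains(Vector2 p) =>
        p.X >= _min.X && p.X <= _max.X && p.Y >= _min.Y && p.Y <= _max.Y;

    public void Insert(Vector2 p)
    {
        if (!Contains(p)) return;
        if (_children != null)
        {
            foreach (var child in _children)
                if (child.Contains(p)) { child.Insert(p); return; }
            return;
        }
        _points.Add(p);
        if (_points.Count > Capacity && _depth < MaxDepth) Split();
    }

    void Split()
    {
        Vector2 mid = (_min + _max) * 0.5f;
        _children = new[]
        {
            new QuadTree(_min, mid, _depth + 1),
            new QuadTree(new Vector2(mid.X, _min.Y), new Vector2(_max.X, mid.Y), _depth + 1),
            new QuadTree(new Vector2(_min.X, mid.Y), new Vector2(mid.X, _max.Y), _depth + 1),
            new QuadTree(mid, _max, _depth + 1),
        };
        foreach (var p in _points) Insert(p);   // re-route stored points into the children
        _points.Clear();
    }

    // Collect all points within `radius` of `centre`, skipping subtrees that are too far away.
    public void Query(Vector2 centre, float radius, List<Vector2> results)
    {
        float dx = MathF.Max(MathF.Max(_min.X - centre.X, 0f), centre.X - _max.X);
        float dy = MathF.Max(MathF.Max(_min.Y - centre.Y, 0f), centre.Y - _max.Y);
        if (dx * dx + dy * dy > radius * radius) return;      // node cannot contain a match

        if (_children != null)
        {
            foreach (var child in _children) child.Query(centre, radius, results);
        }
        else
        {
            foreach (var p in _points)
                if (Vector2.DistanceSquared(p, centre) <= radius * radius)
                    results.Add(p);
        }
    }
}
```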
With regards to collision detection, each object also generally has a bounding volume mesh (set of polygons forming a convex hull) associated with it. These highly simplified meshes (sometimes just a cube) aren't drawn but are used in the detection of collisions. The most rudimentary method is to create a plane that is perpendicular to the line connecting the midpoints of each object with the plane intersecting the line at the line's midpoint. If an object's bounding volume has points on both sides of this plane, it is a collision (you only need to test one of the two bounding volumes against the plane). Another method is the enhanced GJK distance algorithm. If you want a tutorial to dive through, check out NeHe Productions' OpenGL lesson #30.
Incidentally, bounding volumes can also be used for other optimizations such as what are called occlusion queries. This is a process of determining which objects are behind other objects (occluders) and therefore do not need to be processed / rendered. Bounding volumes can also be used for frustum culling, which is the process of determining which objects are outside of the perspective viewing volume (too near, too far, or beyond your field-of-view angle) and therefore do not need to be rendered.
As Kylotan noted, using a bounding volume can generate false positives when detecting occlusion and simply does not work at all for some types of objects such as toroids (e.g. looking through the hole in a donut). Having objects like these occlude correctly is a whole other thread on portal-culling.
Quadtrees and octrees, and another quadtree example, are popular ways, using space partitioning, to accomplish this. The latter example shows a 97% reduction in processing over a pair-by-pair brute-force search for collisions.
A common technique in game physics engines is the sweep-and-prune method. This is explained in David Baraff's SIGGRAPH notes (see Motion with Constraints chapter). Havok definitely uses this, I think it's an option in Bullet, but I'm not sure about PhysX.
The idea is that you can look at the overlaps of AABBs (axis-aligned bounding boxes) on each axis; if the projection of two objects' AABBs overlap on all three axes, then the AABBs must overlap. You can check each axis relatively quickly by sorting the start and end points of the AABBs; there's a lot of temporal coherence between frames since usually most objects aren't moving very fast, so the sorting doesn't change much.
Once sweep-and-prune detects an overlap between AABBs, you can do the more detailed check for the objects, e.g. sphere vs. box. If the detailed check reveals a collision, you can then resolve the collision by applying forces, and/or trigger a game event or play a sound effect.
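A minimal sketch of that idea in C# (not code from Havok, Bullet, or PhysX): two AABBs overlap only if their projections overlap on all three axes, and sorting the intervals on one axis lets you break out early for most pairs before doing the full test.

```csharp
using System;
using System.Collections.Generic;

struct Aabb
{
    public float MinX, MaxX, MinY, MaxY, MinZ, MaxZ;

    public bool Overlaps(in Aabb other) =>
        MinX <= other.MaxX && other.MinX <= MaxX &&
        MinY <= other.MaxY && other.MinY <= MaxY &&
        MinZ <= other.MaxZ && other.MinZ <= MaxZ;
}

static class SweepAndPrune
{
    // Returns candidate pairs whose AABBs overlap; these go on to the detailed check
    // (sphere vs. box, mesh vs. mesh, ...).
    public static List<(int, int)> FindOverlaps(Aabb[] boxes)
    {
        // Sort object indices by the start of their interval on the X axis.
        int[] order = new int[boxes.Length];
        for (int i = 0; i < order.Length; i++) order[i] = i;
        Array.Sort(order, (a, b) => boxes[a].MinX.CompareTo(boxes[b].MinX));

        var pairs = new List<(int, int)>();
        for (int i = 0; i < order.Length; i++)
        {
            for (int j = i + 1; j < order.Length; j++)
            {
                // Once the next box starts past this one's end on X, no later box can overlap it.
                if (boxes[order[j]].MinX > boxes[order[i]].MaxX)
                    break;
                if (boxes[order[i]].Overlaps(boxes[order[j]]))
                    pairs.Add((order[i], order[j]));
            }
        }
        return pairs;
    }
}
```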
Correct. Normally there is not an event listener for each object. Often there is a non-binary tree structure in memory that mimics your game's map. Imagine a metro/underground map.
This memory structure is a collection of things in the game: you the player, monsters, and items that you can pick up or items that might blow up and harm you. So as the player moves around the game, the player object pointer is moved within the game/map memory structure.
see How should I have my game entities knowledgeable of the things around them?
I would like to recommend the solid book by Christer Ericson on real-time collision detection. It presents the basics of collision detection while providing references to contemporary research efforts.
Real-Time Collision Detection (The Morgan Kaufmann Series in Interactive 3-D Technology)
There are a lot of optimizations that can be used.
Firstly, any object (say with index i) is bounded by a cube, with center coordinates CXi, CYi and size Si.
Secondly, collision detection works with estimations:
a) Find all pairs of cubes i, j satisfying the condition: Abs(CXi-CXj) < (Si+Sj) AND Abs(CYi-CYj) < (Si+Sj)
b) Now we work only with the pairs found in a). We calculate the distances between them more accurately, something like Sqrt(Sqr(CXi-CXj)+Sqr(CYi-CYj)); objects are now represented as sets of a few simple figures (cubes, spheres, cones) and we use geometry formulas to check for intersections between these figures.
c) Objects from b) with detected intersections are processed as collisions, with physics calculations etc. A minimal sketch of steps a) and b) is below.
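A small sketch of steps a) and b) in C# (names are illustrative, Si is taken as the cube's half-extent, and the narrow-phase test is a simple distance check; a real version would test the objects' constituent cubes/spheres/cones against each other):

```csharp
using System;
using System.Collections.Generic;

struct BoundedObject
{
    public double Cx, Cy;    // center of the bounding cube
    public double Size;      // half-extent of the bounding cube (assumed here)
}

static class SimpleCollision
{
    // Step a): coarse test using only centers and sizes.
    static bool CubesMayOverlap(BoundedObject a, BoundedObject b) =>
        Math.Abs(a.Cx - b.Cx) < (a.Size + b.Size) &&
        Math.Abs(a.Cy - b.Cy) < (a.Size + b.Size);

    // Steps b) and c): a more accurate check on the surviving pairs.
    public static List<(int, int)> FindCollisions(BoundedObject[] objects)
    {
        var collisions = new List<(int, int)>();
        for (int i = 0; i < objects.Length; i++)
            for (int j = i + 1; j < objects.Length; j++)
                if (CubesMayOverlap(objects[i], objects[j]))
                {
                    double dx = objects[i].Cx - objects[j].Cx;
                    double dy = objects[i].Cy - objects[j].Cy;
                    if (Math.Sqrt(dx * dx + dy * dy) < objects[i].Size + objects[j].Size)
                        collisions.Add((i, j));   // hand these to the physics response
                }
        return collisions;
    }
}
```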