Laying out a row of bricks on a wall in XNA 4 - xna-4.0

I am very new to XNA 3D programming, and my English is not very good!
My problem:
I have two 3D box models (created in 3ds Max): the larger one is the wall and the smaller one is a brick. I want to lay out a row of bricks on my wall, but I don't know how to get the "corners" of the wall and the bricks, an accurate BoundingBox, and their front faces.
To get a bounding box I used the code from this tutorial, but it doesn't work well: calling the BoundingBox.GetCorners() method doesn't give me the correct corners of my boxes.
How do I get the bounding box reliably?
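A common way to compute a model's bounding box in XNA 4 is to walk every mesh part's vertex buffer and take the min/max of the transformed positions. The sketch below is only an illustration of that approach (not the tutorial's code); it assumes the content-pipeline default VertexPositionNormalTexture vertex format and that you pass in the model's world matrix, since leaving out the bone or world transforms is a common cause of "wrong corners":

```csharp
// Minimal sketch: build a BoundingBox from the model's vertex positions,
// transformed by each mesh's absolute bone transform and the model's world matrix.
// Assumes the vertices use the default VertexPositionNormalTexture format.
BoundingBox ComputeBoundingBox(Model model, Matrix world)
{
    Matrix[] boneTransforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(boneTransforms);

    Vector3 min = new Vector3(float.MaxValue);
    Vector3 max = new Vector3(float.MinValue);

    foreach (ModelMesh mesh in model.Meshes)
    {
        Matrix transform = boneTransforms[mesh.ParentBone.Index] * world;

        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            int stride = part.VertexBuffer.VertexDeclaration.VertexStride;
            var vertices = new VertexPositionNormalTexture[part.NumVertices];
            part.VertexBuffer.GetData(
                part.VertexOffset * stride, vertices, 0, part.NumVertices, stride);

            foreach (var vertex in vertices)
            {
                Vector3 position = Vector3.Transform(vertex.Position, transform);
                min = Vector3.Min(min, position);
                max = Vector3.Max(max, position);
            }
        }
    }
    return new BoundingBox(min, max);
}
```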

Related

Three.js algorithm for marquee selecting in 3D scene

I'm creating a 3D model editor application using THREE.js where you can load a CAD model and have it display on the screen. You can pan, zoom, rotate the camera anywhere around in the scene to view the CAD model from any angle.
I want to add support to be able to draw an arbitrary rectangle on the screen (marquee select box) and anything inside this box I'd like to become selected.
What is a good algorithm to use for this operation?
My first thought was to take every loaded CAD part (that can be selected) and project its bounding box onto the screen, then test each of these projected boxes against the selection box drawn on the screen for matches. This should work; however, I'm worried it would be very slow for large CAD models with thousands of selectable parts.
Is there a better way to do marquee selections in 3D? Can raycasting somehow be used to speed up the selections?
Without knowing more details about your CAD models it's a bit hard to give exactly relevant suggestions, but I can suggest a few things I might try.
Use Hierarchical Bounding Boxes
If you have a multi-level tree of meshes you can generate bounding boxes for the non-leaf nodes of the tree. This isn't supported directly in THREE, but you can manually create these boxes and check against them before checking whether the child objects are within the marquee.
If your tree isn't spatially organized very well, or is very flat, you can build an octree and traverse its nodes before checking the meshes.
Of course these data structures have to be updated whenever meshes move in your CAD model.
Cache World Bounds
If you cache versions of the bounding boxes of all the meshes in world space, then instead of projecting the bounds into screen space you can create a frustum from the marquee in world space and check all the mesh bounds without having to do any transformation of those boxes.
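The same idea can be sketched with XNA's BoundingFrustum and BoundingBox types (Three.js exposes the analogous THREE.Frustum and THREE.Box3 classes). This is only an illustration of the technique; the camera parameters, the marquee corners in normalized screen coordinates, and the cached world-space boxes are all assumptions:

```csharp
// Minimal sketch: build a selection frustum from a marquee rectangle given in
// normalized screen coordinates (0..1, origin at the top-left), then test cached
// world-space bounding boxes against it. fieldOfView/aspectRatio/nearPlane/farPlane
// and the view matrix are assumed to match the camera used for rendering.
BoundingFrustum BuildMarqueeFrustum(
    Matrix view, float fieldOfView, float aspectRatio,
    float nearPlane, float farPlane,
    Vector2 marqueeMin, Vector2 marqueeMax)
{
    // Extents of the full view frustum on the near plane.
    float top   = nearPlane * (float)Math.Tan(fieldOfView * 0.5f);
    float right = top * aspectRatio;

    // Shrink those extents to the marquee rectangle (screen y grows downward).
    float subLeft   = MathHelper.Lerp(-right, right, marqueeMin.X);
    float subRight  = MathHelper.Lerp(-right, right, marqueeMax.X);
    float subTop    = MathHelper.Lerp( top,  -top,   marqueeMin.Y);
    float subBottom = MathHelper.Lerp( top,  -top,   marqueeMax.Y);

    Matrix marqueeProjection = Matrix.CreatePerspectiveOffCenter(
        subLeft, subRight, subBottom, subTop, nearPlane, farPlane);

    return new BoundingFrustum(view * marqueeProjection);
}

// Usage: any part whose cached world-space box touches the frustum is selected.
// foreach (BoundingBox box in cachedWorldBounds)
//     if (frustum.Intersects(box)) { /* mark the part as selected */ }
```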
Asynchronous Checking
Instead of gathering all the intersected bounds on a single frame you could gather them up over multiple frames if it is taking a long time.
Unfortunately I don't think raycasting can do a whole lot for you here.
Hopefully that helps!

Draw grid on top of model on each face

I am a beginner in ThreeJS. I have spent quite a number of hours trying to figure out my requirements, so as a last resort I am asking here.
I am loading a 3D Shipping Container model using OBJLoader. I have added OrbitControls to rotate the camera.
My next task is to draw a grid on the required side of the model for a given number of cells, and then highlight/shade a particular cell with a given color.
The grid has to fit within the length and height of that side of the object.
Please give me guidelines/steps on how to achieve the above.
Any code samples would be great. I have gone through almost all of the Three.js samples but none suits my requirements.

Unity and Infrared

I would like to make a game where I use a camera with infrared tracking, so that I can track people's heads (from a top view). For example, each player would get a helmet so that the camera or infrared sensor can track him/her.
After that I need to know the exact position of each person in Unity, to place a 3D GameObject at the player's position.
Maybe there is another workaround to get people's positions into Unity. I know I could use a Kinect, but I need to track at least 10 people at the same time.
Thanks
Note: This is not really a complete answer, just a collection of my thoughts regarding your question on how to transfer recorded positions into Unity.
If you really need full 3D positions, I believe you won't be happy using only one sensor. In order to obtain depth information, which can then be used to calculate 3D positions in a reference coordinate system, you would have to use at least two sensors.
Another thing you could do is fix the camera position and assume that all persons move in the same plane (e.g. fixed y-component), which would allow you to determine 3D positions using the projection formula, given the camera parameters (so the camera has to be calibrated).
What also comes to my mind is: you could try to simulate your real camera with a virtual camera in Unity. This way you can use the virtual camera to project image coordinates (coming from the real camera) into Unity's 3D world. I haven't tried this myself, but someone else has; you can have a look at it here: https://community.unity.com/t5/Editor/How-to-simulate-Unity-Pinhole-Camera-from-its-intrinsic/td-p/1922835
Edit given your comment:
Okay, sticking to your soccer example, you could proceed as follows:
Setup: Say you define your playing area to be rectangular, with its origin in the bottom-left corner (think of UVs). You set these points in the real world (and in Unity's representation of it) as (0,0) (bottom left) and (width, height) (top right), choosing whichever unit you like (e.g. meters, as this is Unity's default unit). As your camera is stationary, you can assign the corresponding corner points in image coordinates (pixel coordinates) as well. To make things easier, work with normalized coordinates instead of pixels, so bottom left is (0,0) and top right is (1,1).
Tracking: When tracking persons in the image, you can calculate their normalized position (x,y) (with x and y in [0,1]). These normalized positions can be transferred into Unity's 3D space (in Unity you will have a playable area of the same width and height) by simply calculating a Vector3 as (x*width, 0, y*height) (in Unity, x points right, y points up and z points forward).
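A minimal Unity C# sketch of that mapping (the component name, the playAreaWidth/playAreaHeight fields and the normalized input are illustrative assumptions, not part of the original setup):

```csharp
using UnityEngine;

// Minimal sketch: map a tracked person's normalized image position (x, y in [0,1])
// onto a rectangular playing area lying in Unity's XZ plane.
public class TrackedPlayerPlacer : MonoBehaviour
{
    public float playAreaWidth = 10f;   // metres, matching the real playing area
    public float playAreaHeight = 6f;   // metres

    // Call this with the normalized position reported by the tracker.
    public void Place(Vector2 normalizedPosition)
    {
        transform.position = new Vector3(
            normalizedPosition.x * playAreaWidth,
            0f,                                    // players move in a fixed plane
            normalizedPosition.y * playAreaHeight);
    }
}
```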
Edit on Tracking:
For top-view tracking in a game, I would say you are on the right track with using some sort of helmet, which enables marker-based tracking (in my opinion markerless multi-target tracking is not reliable enough for use in a video game). If you want to learn more about object tracking, there are lots of resources in the field of computer vision.
Independent of the sensor you are using (IR or camera), you would create a unique marker for each helmet, enabling you to identify each helmet (and thus each player). A marker in this case is some sort of unique pattern that can be recognized by an algorithm in each recorded frame. With IR you can arrange square IR markers to form a specific pattern, and for normal cameras you can use markers like QR codes (there are also augmented-reality libraries that offer functionality for creating and recognizing markers, e.g. ArUco or ARToolKit, although I don't know if they offer C# libraries; I have only used ArUco with C++ a while ago).
When you have your markers of choice, the tracking procedure is then pretty straightforward, for each recorded image:
- detect all markers in the current image (these correspond to all players currently visible)
- follow the steps from my last edit using the detected positions
I hope that helps, feel free to contact me again.

Isometric Sprites

This might be a stupid question but I'm stuck and can't get past it. I'm making an isometric game and I have my map built using tiles; I followed this tutorial to build the map: http://www.binpress.com/tutorial/creating-a-city-building-game-with-sfml/137. But now I don't know how to add character sprites. Do I have to add these sprites using tiles as well, or do I just draw the sprites at the right position on the screen? Any help would be much appreciated.
As far as I can tell from the engine, just follow the "Textures and Animations" guide and draw the Animation to the screen after you have drawn the tiles. This isn't a complicated engine, so you are only working with 2D sprites being drawn to the screen (the 3D effect is merely a trick of the painter's algorithm; there is no z-axis, from what the tutorial indicates).
The depth is handled by the order of tile rendering.
The same goes for objects, players, etc. Let's assume the XY plane is parallel to the ground and the Z axis is the altitude; then, with a diamond-shaped layout, your grid forms rows of tiles drawn one after another.
Order of rendering
You have to handle object, player and item sprites in the same way as tiles (and at the same time), so you should render all cells in a specific order that depends on your grid layout and how the sprites are combined. If your sprites can overwrite already-rendered content, then you should render from the tiles most distant from the "camera" to the closest ones, with the Z axis increasing in the innermost loop.
So if an object, player or item is placed in cell (x,y,z), you should render it directly after the tile at (x,y,z) is rendered, before rendering any other cell (see the sketch below).
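A minimal sketch of that loop order, with DrawTile and DrawEntitiesAt standing in for whatever draw calls your engine (SFML, XNA, ...) actually provides:

```csharp
// Minimal sketch of back-to-front rendering for a diamond-layout tile map:
// draw cells from the most distant row to the closest, and draw whatever
// stands on a cell immediately after that cell's tile.
abstract class IsoMapRenderer
{
    protected abstract void DrawTile(int x, int y);
    protected abstract void DrawEntitiesAt(int x, int y);   // players, objects, items

    public void DrawMap(int mapWidth, int mapHeight)
    {
        for (int y = 0; y < mapHeight; y++)      // far rows first
            for (int x = 0; x < mapWidth; x++)   // then across each row
            {
                DrawTile(x, y);
                DrawEntitiesAt(x, y);
            }
    }
}
```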
To speed things up, it is a good idea to store objects and players in your tile map as cell contents, but for that your tiles have to be organized the right way and your map representation must be capable of it.

Implementing terrains in XNA similar to Battle Zone (1980)

I am developing a 3D game for Windows Phone that includes terrain and volcanoes at an infinite distance, similar to Battle Zone (1980) by Atari Inc. The player can never touch the terrain no matter how far the player drives. Currently, to implement this I am mapping a 2D texture onto the inside wall of a cylinder. The cylinder also moves with the player so that the player can never reach the terrain. I am not sure whether this is a good method to implement the terrain, as I am facing problems like texture distortion when mapping it onto the cylinder wall.
Can you suggest methods to implement a view of distant terrain in XNA similar to Battle Zone?
Normally, instead of a cylinder, developers use a box (a so-called skybox).
It has fewer polygons and in general less distortion (there can be some at the edges).
To make it look more real, some developers (Valve, for example) use an off-screen render in a first pass that includes the skybox plus some distant low-detail models and moving cloud sprites, or a textured ring with alpha. The two points of view (main camera and off-screen camera) are synchronised, and then, without clearing the colour buffer, the final scene is rendered on top. Thanks to that, far buildings move a bit and the scene's surroundings look less flat. To avoid clearing the z-buffer between passes, they simply do the first pass underneath the floor (literally) of the main pass's scene.
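A minimal sketch of the basic single-pass skybox draw in XNA 4 (not the full two-pass Valve-style setup described above); skyboxModel, view, projection and cameraPosition are assumed to already exist in your game, and GraphicsDevice is the one from your Game class:

```csharp
// Minimal sketch: draw the skybox first, centred on the camera and without
// writing depth, so the normal scene is drawn on top and the box never gets
// closer no matter how far the player drives.
void DrawSkybox(Model skyboxModel, Matrix view, Matrix projection, Vector3 cameraPosition)
{
    GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;   // no depth writes
    GraphicsDevice.RasterizerState = RasterizerState.CullNone;        // we are inside the box

    foreach (ModelMesh mesh in skyboxModel.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.World = Matrix.CreateTranslation(cameraPosition);  // box follows the camera
            effect.View = view;
            effect.Projection = projection;
        }
        mesh.Draw();
    }

    GraphicsDevice.DepthStencilState = DepthStencilState.Default;     // restore for the scene
}
```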
