If I make a human model and import it into a game engine, does the game engine know all the point coordinates on the model and rotate each one? Models consist of millions of points, so if I rotate a model 90 degrees, does the game engine calculate the new location of millions of points and rotate them? How does it work? Thanks
This is a bit of a vague question since each game engine will work differently, but in general the game engine will not touch the model coordinates.
Models are usually loaded with model space (or local space) coordinates - this simply means that each vertex is defined with a location relative to the origin of that model. The origin is defined as (0,0,0) and is the point around which rotations take place.
Now the game engine loads and keeps the model in this coordinate space. Then you provide your transformations (such as translation and rotation matrices) to place that model somewhere in your "world" (i.e. the global coordinate space shared by all objects). You also specify how you want to view this world with various other transforms, such as the projection and view matrices.
The game engine then takes all of these transformations and passes them to the GPU (or software renderer, in some cases) - it will also set up other state such as textures, etc. These are usually set once per frame (or per object for a frame).
Finally, it then passes each vertex that needs to be processed to the renderer. Each vertex is then transformed by the renderer using all the transformations specified to get a final vertex position - first in world space and then in screen space - which it can use to render pixels based on various other information (such as textures and lighting).
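To make that division of labor concrete, here is a minimal standalone sketch (plain Java, no engine or GPU involved, and the class and method names are just for illustration) of what the renderer conceptually does to one vertex: the engine supplies a single model matrix, and every vertex is multiplied by it - the stored vertex positions themselves are never edited.

public class TransformChain {

    // Multiply a 4x4 matrix (row-major) by a homogeneous vertex (x, y, z, 1).
    static double[] transform(double[][] m, double[] v) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++)
            for (int k = 0; k < 4; k++)
                r[i] += m[i][k] * v[k];
        return r;
    }

    // A rotation around the Y axis as a 4x4 matrix.
    static double[][] rotationY(double radians) {
        double c = Math.cos(radians), s = Math.sin(radians);
        return new double[][] {
            {  c, 0, s, 0 },
            {  0, 1, 0, 0 },
            { -s, 0, c, 0 },
            {  0, 0, 0, 1 }
        };
    }

    public static void main(String[] args) {
        double[] localVertex = { 1, 0, 0, 1 };             // stored model-space position, never modified
        double[][] model = rotationY(Math.toRadians(90));  // the single transform the engine supplies
        double[] worldVertex = transform(model, localVertex);
        System.out.printf("world position: (%.2f, %.2f, %.2f)%n",
                worldVertex[0], worldVertex[1], worldVertex[2]);
        // A full pipeline applies the view and projection matrices the same way:
        // clip = projection * view * model * localVertex
    }
}

The GPU performs exactly this kind of multiplication for millions of vertices in parallel every frame, which is why rotating a whole model is cheap even though no stored coordinate ever changes.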
So the point is, in most cases, the engine really has nothing to do with the rotation of the model/vertices. It is simply a way to manage the model and the various settings that apply to it.
Of course, the engine can rotate the model and modify its vertices, but this is usually only done during loading - for example, if the model needs to be converted between different coordinate spaces.
There is a lot more going on, and this is a very basic description of what actually happens. There are many sources that describe this process in great detail, so I won't try to duplicate them here. Hopefully this gives you enough detail to understand the basics.
I'm building a Starflight-inspired 2D space exploration game with a procedural world. The gameplay is divided into different "scenes" (to use Godot terminology) to manage the different "depths" of the game. For example, interstellar flight is a scene where the star systems are simply represented by star sprites. When the player gets in range, the view is moved to the solar system scene, where the player moves his ship inside the actual solar system.
So far so good: I generate the universe (the solar systems) from a hard-coded array of coordinates and seeds. Now I also want to make the universe generation procedural, but I'm guessing that loading a whole universe (there is no real limit to the number of solar systems once it becomes procedural) into memory won't be efficient.
I'm thinking of generating the universe on the first run and saving the data to a file, but I'm wondering how to load the relevant data in an efficient way that would let me load only a certain "radius" of data around the player's ship. I feel like that would be the way to go if I use my generation algorithms that produce "realistic" galaxy shapes, since they involve many steps of data processing (different cluster shapes are generated, arms, blobs, etc., and then stars are spun around the center to simulate the galaxy's rotation, etc.) that would probably take too long to calculate in real time.
I'm wondering which approach I should take to this problem. It's not really language or engine dependent, so references to generic articles and algorithms on the subject would suffice.
I also read a bit about quadtrees and I think I'm getting somewhere with that, but I'm not exactly sure how to use one with a file on disk.
Thanks in advance for your help!
I have some suggestions:
Do not generate the whole universe on the first run; generate only the areas that are somehow visible. Then, instead of loading the whole universe from disk, you just generate an area whenever your spaceship (or whatever) comes within view distance of it. This makes game initialization much faster and allows an (almost) infinite universe.
If you want the universe to be modifiable, store only the "edits" that a player makes. So if you want to show a part of the universe, generate that part from your seed and then overlay the stored edits. This makes storage much smaller (see the sketch after these suggestions).
For storage on disk, have a look at R-trees, especially the R*-tree and R+-tree variants; they are designed for storing spatial data in disk pages.
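Here is a minimal sketch of the first two suggestions combined (the class, the per-sector hash and the StarEdit record are hypothetical names, not from any particular engine): sectors are regenerated deterministically from a seed whenever they come into view, and only the player's edits are ever stored.

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class Universe {

    private final long worldSeed;
    // Only player-made changes are persisted, keyed by sector coordinates.
    private final Map<Long, StarEdit> edits = new HashMap<>();

    public Universe(long worldSeed) { this.worldSeed = worldSeed; }

    // Derive a deterministic per-sector seed from the world seed and coordinates.
    private long sectorSeed(int sx, int sy) {
        long h = worldSeed;
        h = h * 31 + sx;
        h = h * 31 + sy;
        return h;
    }

    // Regenerate the same sector content every time it comes into view.
    public int starsInSector(int sx, int sy) {
        Random rng = new Random(sectorSeed(sx, sy));
        int stars = rng.nextInt(5);                 // 0..4 procedurally placed stars
        StarEdit edit = edits.get(key(sx, sy));
        if (edit != null) stars += edit.delta();    // apply the player's stored change
        return stars;
    }

    public void recordEdit(int sx, int sy, int delta) {
        edits.put(key(sx, sy), new StarEdit(delta));
    }

    private static long key(int sx, int sy) {
        return (((long) sx) << 32) ^ (sy & 0xffffffffL);
    }

    // Only this tiny record would ever need to be written to disk.
    record StarEdit(int delta) {}
}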
As TilmannZ suggested, you should not be generating the whole dataset for the galaxy when you start the game, because there is likely no need (unless the player needs to see/interact with all the data at once - e.g. all stars). If that is the case, for example for a star map, then you may be better off generating all the data once and saving the result to an image file.
Instead, you should only generate the data as needed around the player. The most obvious way to do this is to construct a grid around the player and keep this grid centered on the player as they move around. As the player moves, you only need to update the conceptual galaxy coordinates of each cell (not the rendered coordinates). Then, for each cell, you can use its coordinates as the input to a value or gradient noise generator like Perlin to determine what features should spawn in that location.
As for 'shaping' the galaxy or universe, one effective way is to sample the pixel data of a greyscale image of a galaxy that has the shape you want. You could load the image's RGB data at run time and, as you generate the stars, use the coordinates of your grid to look up the RGB value, which you can use as a density factor for star generation: the whiter the pixel, the higher the star density at that location, and vice versa for black pixels. This method effectively lets you draw the shape of the galaxy in a paint program.
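Here is a minimal sketch of that idea (the file name galaxy.png, the grid size and the star-count scaling are assumptions): each conceptual cell is mapped to a pixel of the greyscale image, and the pixel brightness scales the number of stars generated for that cell.

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import java.util.Random;

import javax.imageio.ImageIO;

public class GalaxyDensity {

    public static void main(String[] args) throws IOException {
        BufferedImage shape = ImageIO.read(new File("galaxy.png"));
        int gridSize = 100;                              // 100 x 100 conceptual cells
        long worldSeed = 42L;

        for (int cy = 0; cy < gridSize; cy++) {
            for (int cx = 0; cx < gridSize; cx++) {
                // Map the cell to a pixel of the density image.
                int px = cx * shape.getWidth() / gridSize;
                int py = cy * shape.getHeight() / gridSize;
                int grey = shape.getRGB(px, py) & 0xFF;  // 0 (black) .. 255 (white)

                // Deterministic per-cell RNG so the same cell always regenerates identically.
                Random rng = new Random(worldSeed * 31L + cx * 73856093L + cy * 19349663L);
                int stars = (int) (grey / 255.0 * 5 * rng.nextDouble()); // denser where whiter
                // ... spawn 'stars' star systems for cell (cx, cy) ...
            }
        }
    }
}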
Maybe think about different layers of abstraction: each layer uses the parent layer, designer input, events, and procedural generation algorithms to generate the data it needs (a small sketch follows this list).
The Universe layer contains user-placed or randomly placed galaxy polygons & types.
The Galaxy layer can add more detail (number & density of spiral arms) or a density map.
A cluster of solar systems.
The solar system adds the stars & planets.
And only create the details for currently needed elements.
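A minimal sketch of this layering (all names and numbers here are hypothetical): each layer derives a child seed from its parent, and deeper layers are only expanded when the player actually needs them.

import java.util.Random;

public class LayeredGeneration {

    static long childSeed(long parentSeed, int index) {
        return parentSeed * 6364136223846793005L + index;   // simple seed mixing
    }

    public static void main(String[] args) {
        long universeSeed = 1234L;

        // Universe layer: decide how many galaxies exist.
        Random universe = new Random(universeSeed);
        int galaxies = 3 + universe.nextInt(5);

        for (int g = 0; g < galaxies; g++) {
            // Galaxy layer: spiral-arm count, density map, etc.
            Random galaxy = new Random(childSeed(universeSeed, g));
            int arms = 2 + galaxy.nextInt(4);

            // Only descend further (clusters, solar systems, planets) for the
            // galaxy the player is actually looking at; the rest stay as seeds.
            System.out.printf("galaxy %d: %d arms%n", g, arms);
        }
    }
}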
I'm developing a simple game where the user can place different but modular objects (for instance: tracks, roads, etc.).
My question is: how do I match and align different objects when one is placed near another?
My first approach is to create a hidden child object (a box) for each modular object and put it on the border where it is possible to place another object (see my image example), so I can use those coordinates (x, y, z) to align the other object.
But I don't know if this is the best approach.
Thanks
Summary:
1. Define what a "snapping point" is
2. Define your threshold
3. Update the new game object's position
Little Explanation
1.
So I suppose you need a way to define which parts of the object are the "snapping points".
They can be obvious in some cases, like a cube, where every vertex could be a snapping point, but it's hard to define that for every vertex of an amorphous object.
A simple solution could be the one proposed by @PierreBaret, which consists of defining on your transform component which points are the snapping points.
The other is the one you propose: creating empty game objects that act as snapping point locations on the game object.
2. Once you have those snapping points, when you drop your new gameObject you need to define a threshold, since you don't want every object to always snap to the nearest game object.
3. So you define a minimum distance between snapping points; if a snapping point is within that threshold, you update its position to align with the snapped point.
Visual Representation:
Note: the threshold distance is shown for just ONE of the 4 threshold checks on the 4 vertices of the square, but the dark blue circle should be replicated 3 more times, once for each green snapping point of the red square.
Of course this method can seem expensive, but you can make improvements, such as first checking a coarse threshold between gameObjects and only checking the snapping threshold distance if the gameObject is inside that coarse threshold.
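Putting steps 1-3 together, here is a minimal sketch (the Point type, field names and threshold value are placeholders, not tied to any particular engine): find the closest pair of snapping points between the dropped object and an existing one, and if it is under the threshold, return the offset that aligns them.

public class Snapping {

    record Point(double x, double y, double z) {
        Point minus(Point o) { return new Point(x - o.x, y - o.y, z - o.z); }
        double dist(Point o) {
            double dx = x - o.x, dy = y - o.y, dz = z - o.z;
            return Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
    }

    static final double SNAP_THRESHOLD = 0.5;    // step 2: the snapping distance

    // Returns the translation to apply to the dropped object, or null if nothing snaps.
    static Point snapOffset(Point[] droppedPoints, Point[] existingPoints) {
        double best = SNAP_THRESHOLD;
        Point offset = null;
        for (Point a : droppedPoints) {           // step 1: iterate the snapping points
            for (Point b : existingPoints) {
                double d = a.dist(b);
                if (d < best) {                   // step 2: only snap under the threshold
                    best = d;
                    offset = b.minus(a);          // step 3: move a onto b
                }
            }
        }
        return offset;
    }
}

The coarse check mentioned above would simply wrap the call to snapOffset in a cheaper object-to-object distance test.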
Hope it helps!
Approach for arbitrary objects/models and deformable models.
[A] A physical approach would consider all the surfaces of the 2 objects, and you might need to check that objects don't overlap, using dot products between surfaces. That's a bit more computationally expensive, but nothing nasty. There is no matching involved here, but you'll be able to add matching features (see [B]). However, that's the only way to work with non-predefined models or deformable models.
Approaches for matching simple and complex models
[B] Snapping points are a good thing, but they are not sufficient alone. I think you need to give an object:
a sparse representation (e.g., reducing a complex oriented sphere to a cube),
and place key snapping points,
tagged by polarity or color, and possibly orientation (that gives oriented snapping points); e.g., in the case of rails, you'll want rails to snap {+} with {+} and forbid {+} with {-}. In the case of a more complex object, or when you have several orientations (e.g., the 2 faces of a surface, of which only one is a candidate for matching a pair of objects), you'll need more than 2 polarities: 3 different ones per matching candidate surface or feature, hence the colors (or any enumeration). You need 3 different colors to make sure there is a unique 3D configuration; you create something that in chemistry is called an enantiomer. (See the sketch just before the references.)
You can also use point pair features, which describe the relative position and orientation of two oriented points, when an oriented surface is not appropriate.
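Here is a minimal sketch of such polarity-tagged, oriented snapping points (the names, the enum and the angle tolerance are assumptions): two points are only allowed to snap when their tags match, as in the rail example, and their outward directions roughly face each other.

public class OrientedSnap {

    enum Polarity { PLUS, MINUS }

    // Position, outward direction and polarity tag of one snapping point.
    record SnapPoint(double[] pos, double[] dir, Polarity polarity) {}

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // Compatible when polarities match ({+} with {+}, never {+} with {-})
    // and the outward directions point roughly toward each other.
    static boolean compatible(SnapPoint a, SnapPoint b) {
        boolean samePolarity = a.polarity() == b.polarity();
        boolean facingEachOther = dot(a.dir(), b.dir()) < -0.9;
        return samePolarity && facingEachOther;
    }
}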
References
Some are computer vision papers or book extracts, but they present the algorithms and concepts needed to achieve what I described in my answer.
Model Globally, Match Locally: Efficient and Robust 3D Object Recognition, Drost et al.
3D Models and Matching
I think this requires a bit of background information:
I have been modding Minecraft for a while now, but I always wanted to make my own game, so I started digging into the freshly released LWJGL3 to actually get things done. Yes, I know it's a bit low level and I should use an engine and stuff... indeed, I already tried some engines and they never quite matched what I wanted to do, so I decided to tackle the problem at its root.
So far, I kind of understand how to render meshes, move the "camera", etc. and I'm willing to take the learning curve.
But the thing is, at some point all the tutorials start to explain how to load models and create skeletal animations and so on... but I think I do not really want to go that way. A lot of things about working with Minecraft code were awful, but I liked how I could create models and animations from Java code. Sure, it did not look super realistic, but since I'm not great with Blender either, I doubt having "classic" models and animations would help. Anyway, in that code, I could rotate a box around to make a creature look at a player, and I could use a sine function to move legs and arms (or wings, in my case), and that worked, since Minecraft used immediate mode and Java could directly tell the graphics card where to draw each vertex.
So, actual question(s): Is there any good way to make dynamic animations in modern (3.3+) OpenGL? My models would basically be a hierarchy of shapes (boxes or whatever) and I want to be able to rotate them on the fly. But I'm not sure how to organize that. Would I store all the translation/rotation-matrices for each sub-shape? Would that put a hard limit on the amount of sub-shapes a model could have? Did anyone try something like that?
Edit: For clarification, what I did looked something like this:
Create a model: https://github.com/TheOnlySilverClaw/Birdmod/blob/master/src/main/java/silverclaw/birds/client/model/ModelOstrich.java
The model is created as a bunch of boxes in the constructor, the render and setRotationAngles methods set scale and rotations.
You should follow an OpenGL tutorial in order to understand the basics.
Let me suggest "Learning Modern 3D Graphics Programming", and especially this chapter, where you move one robot arm with multiple joints.
I did a port in Java using JOGL here, but you can easily port it to LWJGL.
What you are looking for is exactly skeletal animation, the only difference being that you do not want to load animations for your bones but want to compute/generate the transforms on the fly.
You basically have a hierarchy of bones with geometry attached to it. It looks like you want to manipulate this geometry "rigidly", so before sending your meshes/transforms to the GPU (the classic way), you start by computing the new transforms in model or world space, then send those freshly computed matrices to draw your geometry on the GPU the standard way.
As Sorin said, to compute each transform you simply iterate over your hierarchy and accumulate transforms, combining the parent bone's transform with your local transform relative to that parent.
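A minimal sketch of that accumulation (the Bone class, the row-major 4x4 layout and the update name are just illustrative):

import java.util.ArrayList;
import java.util.List;

public class Bone {

    float[][] localTransform = identity();   // rotation/translation relative to the parent
    float[][] worldTransform = identity();   // result: parent.world * local
    final List<Bone> children = new ArrayList<>();

    // Walk the hierarchy, accumulating the parent's transform into each child.
    void update(float[][] parentWorld) {
        worldTransform = mul(parentWorld, localTransform);
        for (Bone child : children) {
            child.update(worldTransform);
        }
        // worldTransform is now ready to upload as this bone's model matrix.
    }

    static float[][] identity() {
        float[][] m = new float[4][4];
        for (int i = 0; i < 4; i++) m[i][i] = 1f;
        return m;
    }

    static float[][] mul(float[][] a, float[][] b) {
        float[][] r = new float[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }
}

You would call update on the root bone once per frame after changing any local transforms (e.g. driving wing rotations with a sine function), then upload each bone's worldTransform before drawing its geometry.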
Yes and no.
You can have your hierarchy of shapes and store a relative transform for each.
For example, the "player" would have a translation to (100, 100, 10) (where the player is), and then the "head" subcomponent would have an additional translation of (0, 0, 5) (just a bit higher on the z axis).
You can store these as matrices (they can encode translation, rotation and scaling) and use glPushMatrix and glPopMatrix to push and pop a matrix on a stack maintained by OpenGL.
The draw() function (or whatever you call it) should look something like:
glPushMatrix();                        // save the parent's current matrix
glMultMatrixf(my_transform);           // apply this object's local transform
                                       // (or use glTranslatef, glRotatef, etc.)
// ... draw this object's mesh ...
for (child : children) { child.draw(); }   // children draw relative to this transform
glPopMatrix();                         // restore the parent's matrix
This gives you a hierarchical setup so that objects move with their parent. Alternatively, you can keep a stack in main memory and do the multiplications yourself (use a library). I think the OpenGL stack may have a limit (implementation dependent), but if you handle it yourself the only limit is the amount of RAM you can use; note also that in a core-profile 3.3+ context the fixed-function matrix stack is not available at all, so there you would maintain the stack yourself and upload the final matrices as shader uniforms. Once all the matrices are multiplied, rendering takes the same amount of time, i.e. it doesn't matter for performance how deep a mesh is in the hierarchy.
For actual animations you need to compute intermediate transformations. For example, for a crouch animation you probably want a few frames in between so that the camera doesn't just jump to the low position. You can do this with a time-based linear interpolation between the start and end positions, but this only covers simple animations and you still have to implement it yourself.
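A minimal sketch of such an interpolation (the duration and heights are made-up numbers):

public class CrouchAnimation {

    static final float DURATION = 0.5f;   // seconds (an assumption)
    float elapsed = 0f;
    float startY = 1.8f, endY = 1.0f;     // standing vs. crouched height

    // Call once per frame with the frame's delta time; returns the current height.
    float update(float deltaSeconds) {
        elapsed = Math.min(elapsed + deltaSeconds, DURATION);
        float t = elapsed / DURATION;     // 0 at the start, 1 at the end
        return startY + (endY - startY) * t;
    }
}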
Anything more complicated (i.e. modify the mesh based on the bone links) you would need to implement yourself.
What algorithms are used for augmented reality apps like Zookazam?
I think it analyzes the image and finds planes by contrast, but I don't know how.
What topics should I read before starting with app like this?
[Prologue]
This is an extremely broad topic and mostly off topic in its current state. I re-edited your question to make it answerable within the rules/possibilities of this site.
You should specify more closely what your augmented reality app:
should do
adding 2D/3D objects with known mesh ...
changing light conditions
adding/removing body parts/clothes/hairs ...
a good idea is to provide some example image (sketch) of input/output of what you want to achieve.
what input it has
video, static image, 2D, stereo, 3D. For pure 2D input, specify what conditions/markers/illumination/laser patterns you have to help the reconstruction.
what will be in the input image? empty room, persons, specific objects etc.
specify target platform
many algorithms are limited by memory size/bandwidth, CPU power, special HW capabilities, etc., so it is a good idea to add a tag for your platform. The OS and language are also good to mention.
[How augmented reality works]
acquire input image
if you are connecting to some device like a camera, you need to use its driver/framework or something similar to obtain the image, or use some common API it supports. This task is OS dependent. My favorite way on Windows is to use the VFW (Video for Windows) API.
I would start with some static file(s) instead, to ease the debugging and incremental building process (you do not need to wait for the camera and so on on each build). When your app is ready for live video, switch back to the camera...
reconstruct the scene into 3D mesh
if you use a 3D camera like the Kinect, this step is not necessary. Otherwise you need to separate the objects with some segmentation process, usually based on edge detection or color homogeneity.
The quality of the 3D mesh depends on what you want to achieve and on your input. For example, if you want realistic shadows and lighting, you need a very good mesh. If the camera is fixed in some room, you can predefine the mesh manually (hard-code it) and compute just the objects in view. Also, object detection/segmentation can be done very simply by subtracting the empty-room image from the current view image, so the pixels with a big difference are the objects.
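A minimal sketch of that subtraction step (the threshold value is an assumption, and this ignores noise, lighting changes and shadows, which a real implementation would have to handle):

import java.awt.image.BufferedImage;

public class BackgroundSubtraction {

    // Pixels that differ strongly from the empty-room reference are flagged as object pixels.
    static boolean[][] segment(BufferedImage emptyRoom, BufferedImage current, int threshold) {
        int w = current.getWidth(), h = current.getHeight();
        boolean[][] mask = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int a = emptyRoom.getRGB(x, y);
                int b = current.getRGB(x, y);
                // Sum of per-channel absolute differences.
                int diff = Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF))
                         + Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF))
                         + Math.abs((a & 0xFF) - (b & 0xFF));
                mask[y][x] = diff > threshold;   // true = part of an object
            }
        }
        return mask;
    }
}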
You can also use planes instead of a real 3D mesh, as you suggested in the OP, but then you can forget about more realistic effects like lighting, shadows, intersections... If you assume the objects are standing upright, you can use room metrics to obtain the distance from the camera. See:
selection criteria for different projections
estimate measure of photographed things
For pure 2D input you can also use the illumination to estimate the 3D mesh; see:
Turn any 2D image into 3D printable sculpture with code
render
Just render the scene back to some image/video/screen... with the added/removed features. If you are not changing the light conditions too much, you can also use the original image and render directly onto it. Shadows can be achieved by darkening the pixels... For better results, the illumination/shadows/spots/etc. are usually filtered out from the original image and then added back directly by rendering instead. See:
White balance (Color Suppression) Formula?
Enhancing dynamic range and normalizing illumination
The rendering process itself is also platform dependent (unless you are doing it with low-level graphics in memory). You can use things like GDI, DirectX, OpenGL, ... See:
Graphics rendering
You also need camera parameters for rendering like:
Transformation of 3D objects related to vanishing points and horizon line
[Basic topics to google/read]
2D
DIP digital image processing
Image Segmentation
3D
Vector math
Homogeneous coordinates
3D scene reconstruction
3D graphics
normal shading
platform dependent
image acquisition
rendering
Given a set of 2D images that cover all sides of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library that allows you to specify a 3D model with bitmap textures. The library would depend on the platform you are using, but start by looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe this is actually theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension.)
I think you'd have to make a ton of assumptions about your geometry, and even then, you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object - not a volumetric object itself.
What you can do, as Tomas suggested, is slap these 2D images onto something. However, you will still need to construct a triangle mesh surface, and actually do all the modeling, for this to present a 3D surface.
I hope this helps.
What currently exists that can do anything close to what you are asking for automagically is extremely proprietary. There are no libraries, but there are some products.
The core issue is matching corresponding points in the images and being able to say: this spot in image A is this spot in image B, and they both match this spot in image C, etc.
There are three ways to go about this, manually matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, $1,145.00US, supports manual matching and coded targets. You print out a bunch of images, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, $2,595.00US, adds texture matching. Tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera, where you use a consistent focal length for every shot and you go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets like colored sticker dots to the object to help you generate contours.
If your images are drawings, like a profile, plan view, etc., PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually, because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.