Let's say I have a dummy character model. The character is standing, and there is an animation that makes the character sit. When I save the character to OBJ and import it into another program (e.g. Blender), I get the standing model. What I want is the sitting character (in Blender). Is it even possible to save an object in its after-animation state?
I have very little experience with three.js and 3D-modeling concepts, so I will appreciate any help.
The first thing you need to do is call obj.updateMatrixWorld(). If this alone doesn't solve the problem, you probably have to clone the vertices and apply the object's matrix to them:
var vector = obj.geometry.vertices[i].clone(); // clone so the original mesh stays untouched
vector.applyMatrix4( obj.matrixWorld );        // bake the world transform into the copy
You then write vector to the OBJ file as the vertex. As far as I know, this shouldn't affect faces or texture coordinates.
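For illustration, here is a minimal sketch of that loop as a vertex dump (the function name is mine, and it assumes the legacy THREE.Geometry with a .vertices array). Note that matrixWorld only bakes in rigid transforms; for a bone-driven pose you would also need to apply the skinning math per vertex.
// Hypothetical helper: dump world-space vertex positions as OBJ "v" lines.
function exportPosedVertices( obj ) {
    obj.updateMatrixWorld( true );
    var lines = [];
    for ( var i = 0; i < obj.geometry.vertices.length; i++ ) {
        var vector = obj.geometry.vertices[ i ].clone();
        vector.applyMatrix4( obj.matrixWorld );
        lines.push( 'v ' + vector.x + ' ' + vector.y + ' ' + vector.z );
    }
    return lines.join( '\n' );
}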
So I'm not sure if this is possible, but I am trying to adapt the physics body to fit my image. Many people have told me that the hitbox for the character in my game on the iOS App Store (StreakDash - go download it!) does not work well, since it makes the game a lot harder: the hitbox hits an obstacle even though the character doesn't appear to be touching it. This is because the hitboxes are rectangular, whereas my character has an irregular shape. Are there any ways around this? I thought of some approaches that didn't quite work (e.g. trying to change the actual frame/canvas shape from rectangular to something else, changing the hitbox shape/size, altering the image, etc.). It would also be great to get advice on whether I should even change the hitbox in the first place (is it the type of rage-inducing game that makes people want to keep playing, or stop?). But finding a way to solve the problem would be best!
Here is a picture of my character with its hitbox:
Here is just some basic code with the SKSpriteNode and physics body:
sprite = [SKSpriteNode spriteNodeWithImageNamed:@"stick"];
sprite.size = CGSizeMake(self.frame.size.width/6.31, self.frame.size.height/3.2);
sprite.physicsBody = [SKPhysicsBody bodyWithRectangleOfSize:CGSizeMake(sprite.size.width, sprite.size.height)];
sprite.position = CGPointMake(self.frame.size.width/5.7, self.frame.size.height/2.9);
sprite.physicsBody.categoryBitMask = personCategory;
sprite.physicsBody.contactTestBitMask = lineCategory;
sprite.physicsBody.dynamic = NO;
sprite.physicsBody.collisionBitMask = 0;
sprite.physicsBody.usesPreciseCollisionDetection = YES;
I suggest representing the physics body with a polygon shape. You have two ways to improve on bodyWithRectangleOfSize:
The lazy way is to use bodyWithTexture:size:, which creates a physics body from the contents of a texture. But as Apple notes, the more complex your physics body is, the more work it takes to simulate properly. You may want to make a tradeoff between precision and performance.
The more proper way is to represent the bounds of your sprite with a convex polygon shape; see bodyWithPolygonFromPath:. There are online tools that generate the path code through a visual interface, for example the SKPhysicsBody Path Generator (be careful with the offset and anchor point). If you know how to write the CGMutablePathRef code yourself, it will be easier to fit it to your situation.
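As a rough sketch of both options (the polygon points below are placeholders; you would trace your own character's outline, keeping the sprite's anchor point in mind):
// Option 1: physics body from the texture's opaque pixels (iOS 8+).
sprite.physicsBody = [SKPhysicsBody bodyWithTexture:sprite.texture size:sprite.size];

// Option 2: convex polygon traced around the character.
// These points are made up; generate real ones with a path tool.
CGMutablePathRef path = CGPathCreateMutable();
CGPathMoveToPoint(path, NULL, -sprite.size.width/4, -sprite.size.height/2);
CGPathAddLineToPoint(path, NULL, sprite.size.width/4, -sprite.size.height/2);
CGPathAddLineToPoint(path, NULL, sprite.size.width/8, sprite.size.height/2);
CGPathAddLineToPoint(path, NULL, -sprite.size.width/8, sprite.size.height/2);
CGPathCloseSubpath(path);
sprite.physicsBody = [SKPhysicsBody bodyWithPolygonFromPath:path];
CGPathRelease(path);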
My game has a very simplistic retro pixel style, where all the models use flat mapping (box unwrap). The unwrap is always the same process in my modeling program: applying a box unwrap modifier with the same settings.
This gets tedious, as I need to explain to other people how to unwrap, and we all sometimes make mistakes or forget to unwrap part of a mesh, which requires a full re-export.
It would be better if I could code this somehow, so other people don't have to mess around with the UVs and can just focus on the model. The model gets materials assigned automatically in-game; ideally the UVs should just be generated on the fly when I load the models in three.js.
Any ideas?
Maybe check this link:
https://github.com/mrdoob/three.js/issues/2065#issuecomment-6352320
This is about planar mapping and only works for THREE.Face3 (triangles), but it is very easy to add a fourth UV coordinate for quads. Once you have this working at the right scale for your object, you can iterate over all of your object's face normals and check which side each face mostly points toward. Then you run the planar mapping for every side and voila: box mapping :) Hope this helps!
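A minimal sketch of that idea, assuming a legacy THREE.Geometry whose face normals are already computed (the helper name and scale parameter are mine):
// Box-map UVs by projecting each face along its dominant normal axis.
function boxUnwrap( geometry, scale ) {
    geometry.faceVertexUvs[ 0 ] = [];
    geometry.faces.forEach( function ( face ) {
        var n = face.normal;
        var ax = Math.abs( n.x ), ay = Math.abs( n.y ), az = Math.abs( n.z );
        var uvs = [ face.a, face.b, face.c ].map( function ( idx ) {
            var v = geometry.vertices[ idx ];
            if ( ax >= ay && ax >= az ) return new THREE.Vector2( v.z * scale, v.y * scale ); // X-facing side
            if ( ay >= az )             return new THREE.Vector2( v.x * scale, v.z * scale ); // Y-facing side
            return new THREE.Vector2( v.x * scale, v.y * scale );                             // Z-facing side
        } );
        geometry.faceVertexUvs[ 0 ].push( uvs );
    } );
    geometry.uvsNeedUpdate = true;
}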
Right, I'm making a 2D car racing game. So far I've got the car moving etc. (with a little help of course) and was wondering how to go about adding collision detection in XNA. I've taken a bumper part (from the whole track) and made it a separate .png file. I was thinking of adding a collision detection box around it (so if the car hits the bumper, it moves back by so and so). How do I add collision detection to the bumper, and integrate it with the car? Thank you!
Try this tutorial: http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel
Source code for the tutorial is in Downloads, under the two (ugly) blue boxes.
It seems this would be a lot easier if the cars were always straight, not rotated. If they were rotated, you wouldn't be able to use simple rectangles for this.
If you keep them straight, you could instead have the bumper included in car.png.
Then you could take the car's coordinates and add offsets to get the position and size of the bumper region.
You can then do
Rectangle bumperBoundingBox = new Rectangle
(
    (int)X_COORDINATE_OF_CAR,  // left edge of the bumper region
    (int)Y_COORDINATE_OF_CAR,  // top edge of the bumper region
    WIDTH_OF_BUMPER,           // Rectangle takes a width, not a right edge
    HEIGHT_OF_BUMPER           // ...and a height, not a bottom edge
);
Rectangle otherCarBoundingBox = new Rectangle( /* x, y, ... */ );
bool carIsTouchingBumper = otherCarBoundingBox.Intersects(bumperBoundingBox);
Note that XNA's Rectangle constructor takes x, y, width, and height, in that order. Once you have the rectangles, you can use carIsTouchingBumper and do stuff.
If you want the bumper to be a separate image, you could do the same thing as above, except use the coordinates of the bumper instead. Also, you would have to make the bumper follow the car, as sketched below.
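A quick sketch of keeping the bumper glued to the car each frame (carPosition and the offset names are placeholders for whatever you track in your game):
// In Update(): recompute the bumper rectangle from the car's current position.
bumperBoundingBox.X = (int)(carPosition.X + BUMPER_OFFSET_X);
bumperBoundingBox.Y = (int)(carPosition.Y + BUMPER_OFFSET_Y);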
Given a set of 2D images that cover all dimensions of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library which allows you to specify a 3D model with bitmap textures. The library would depend on platform you are using, but start with looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe this is actually theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension).
I think you'd have to make a ton of assumptions about your geometry, and even then, you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object - not a volumetric object itself.
What you can do, like Tomas suggested, is slap these 2D images onto something. However, you will still need to construct a triangle-mesh surface, and actually do all the modeling, for this to present a 3D surface.
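For a concrete (if crude) illustration in three.js, which is used elsewhere on this page: assuming six photos of the car saved as PNGs (the file names below are placeholders), you can map one image to each face of a box.
// Crude stand-in model: a box with one photo per face.
// Recent three.js versions accept an array of materials directly.
var loader = new THREE.TextureLoader();
var sides = [ 'right', 'left', 'roof', 'bottom', 'front', 'rear' ];
var materials = sides.map( function ( side ) {
    return new THREE.MeshBasicMaterial( { map: loader.load( side + '.png' ) } );
} );
var car = new THREE.Mesh( new THREE.BoxGeometry( 4, 1.5, 2 ), materials );
scene.add( car ); // assumes an existing scene and renderer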
I hope this helps.
What currently exists that can do anything close to what you are asking for automagically is extremely proprietary. There are no libraries, but there are some products.
The core issue is matching corresponding points in the images: being able to say that this spot in image A is this spot in image B, and that they both match this spot in image C, etc.
There are three ways to go about this: manual matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, $1,145.00 US, supports manual matching and coded targets. You print out a bunch of target images, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, $2,595.00 US, adds texture matching: tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera: you use a consistent focal length for every shot and go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets like colored sticker dots to the object to help you generate contours.
If your images are drawings, like profile, plan view, etc., PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually, because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.
I am still shiny new to XNA, so please forgive any stupid questions and statements in this post. (An added issue is that I am using Visual Studio 2010 with .NET 4.0, which also means very few examples exist out on the web; well, none that I could find easily.)
I have two 2D objects in a "game" that I am using to learn more about XNA. I need to figure out when these two objects intersect.
I noticed that the Texture2D objects has a property named "Bounds" which in turn has a method named "Intersects" which takes a Rectangle (the other Texture2D.Bounds) as an argument.
However, when I run the code, the objects always intersect, even if they are on opposite sides of the screen. When I step into the code, mousing over each Texture2D's Bounds shows four values, and the X and Y coordinates always read "X = 0, Y = 0" for both objects (hence they always intersect).
The thing that confuses me is that the Bounds property is on the Texture rather than tied to the Position (or Vector2) of the objects. I eventually created a little helper method that takes in the objects and their positions and then calculates whether they intersect, but I'm sure there must be a better way.
Any suggestions or pointers would be much appreciated.
Gineer
The Bounds property was added to the Texture2D class to simplify working with Viewports. More here.
You shouldn't think of the texture as being the object itself; it merely holds the data that gets drawn to the screen, whether it's used for a Sprite or a RenderTarget. How position and movement are handled is entirely up to you, so you have to track them yourself. That includes the position of any bounds.
The 2D Rectangle Collision tutorial is a good start, as you've already found :)
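A minimal sketch of the usual pattern (the names here are placeholders): build a rectangle from the position you track yourself, plus the texture's size, each time you test.
// Texture2D.Bounds always sits at (0, 0); use your own tracked position instead.
Rectangle BoundsFor(Vector2 position, Texture2D texture)
{
    return new Rectangle((int)position.X, (int)position.Y,
                         texture.Width, texture.Height);
}

bool hit = BoundsFor(playerPosition, playerTexture)
               .Intersects(BoundsFor(enemyPosition, enemyTexture));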
I found the XNA Creator Club tutorials based on another post to stackoverflow by Ben S. The Collision Series 1: 2D Rectangle Collision tutorial explains it all.
It seems you have to create new rectangles every time you run the intersection method, based on where the objects have moved in the game, so that the rectangles contain the updated X and Y coordinates.
I am still not quite sure why the original rectangle's position cannot just be kept up to date, but if this is the way it is supposed to work, that's good enough for me... for now. ;-)