three.js Geometry.faceVertexUvs setup is confusing

I am working on implementing a custom model importer. The file contains all the necessary info (vertices, vertex normals, vertex uv coordinates, materials, etc.)
It's important to note that the file contains models with multiple materials.
The setup of the vertices is fairly straightforward and is working correctly.
The setup of the faces I'm not 100% sure about. I do the following:
meshDict[name].faces.push(
    new THREE.Face3(parseInt(triangles[t]),
                    parseInt(triangles[t + 1]),
                    parseInt(triangles[t + 2]),
                    [normals[parseInt(triangles[t])],
                     normals[parseInt(triangles[t + 1])],
                     normals[parseInt(triangles[t + 2])]],
                    new THREE.Vector3(1, 1, 1), matIndex));
Here, t is the loop index into the triangles array, normals is an array holding the per-vertex normals, and matIndex is the material index of the face, based on the sub-mesh number from the object file. This also seems to work correctly.
Now for the hard part. I searched all day for a clear explanation and/or good example of how to set up the faceVertexUvs of a multi-material mesh, but every second post I found shows a different method to set this up. After a lot of trial and error I got to this solution that works, but throws a LOT of warnings...
for (var f = 0; f < faces.length; f++)
{
    if (currentMesh.faceVertexUvs[faces[f].materialIndex] == undefined)
    {
        currentMesh.faceVertexUvs[faces[f].materialIndex] = [];
        faceOffset = (faces[f].materialIndex == 0 ? 0 : 1) * f;
    }
    currentMesh.faceVertexUvs[faces[f].materialIndex].push(f);
    currentMesh.faceVertexUvs[faces[f].materialIndex][f - faceOffset] = [];
    currentMesh.faceVertexUvs[faces[f].materialIndex][f - faceOffset].push(uvs[faces[f].a]);
    currentMesh.faceVertexUvs[faces[f].materialIndex][f - faceOffset].push(uvs[faces[f].b]);
    currentMesh.faceVertexUvs[faces[f].materialIndex][f - faceOffset].push(uvs[faces[f].c]);
}
Here uvs is an array of Vector2 of the same length as the vertices array.
Basically I am doing:
faceVertexUvs[materialIndex][faceIndex][uvs[a], uvs[b], uvs[c]].
The number of material indexes is equal to the number of sub-meshes that the object has.
So this solution kinda works OK, but some of the textures are not looking correct (I suspect because the UV mapping of that area is not being set correctly), and I am getting a lot of warnings.
It is important to note that all the models look OK in the exporting program, so the issue is not coming from there.
Any ideas as to what I am doing wrong here?

I think I finally managed to figure it out and got it working correctly. Since the documentation is severely lacking in this area and the examples are not quite clear, I'll post my understanding of this topic here for anyone having the same issue.
So it goes like this:
Geometry.faceVertexUvs[UV layer][face index][uv[face.a], uv[face.b], uv[face.c]]
As far as I understand, unless you have an AO (ambient occlusion) map, you only use UV layer 0.
Now, if you define a material index when setting up the faces of the geometry, then each face will be rendered with the corresponding material, and there is no need to split the UVs into separate areas like I was doing in my question. So you only have to use:
Geometry.faces.push(new THREE.Face3(v0, v1, v2, [n0, n1, n2], vertexColor, materialIndex));
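To make it concrete, here is a minimal sketch of the importer loop, using the same array names as in my question (triangles, normals, uvs, matIndex) and the legacy THREE.Geometry API:
// Minimal sketch; the array names come from the question above.
var geometry = new THREE.Geometry();
geometry.faceVertexUvs[0] = []; // UV layer 0 is enough unless you also need an AO/light map
for (var t = 0; t < triangles.length; t += 3) {
    var a = parseInt(triangles[t]),
        b = parseInt(triangles[t + 1]),
        c = parseInt(triangles[t + 2]);
    // The material index on the face picks the sub-material, so no per-material UV buckets are needed.
    geometry.faces.push(new THREE.Face3(a, b, c,
        [normals[a], normals[b], normals[c]],
        new THREE.Color(1, 1, 1), matIndex));
    // One UV triplet per face, pushed in the same order as the faces array.
    geometry.faceVertexUvs[0].push([uvs[a], uvs[b], uvs[c]]);
}
geometry.uvsNeedUpdate = true;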

Related

Change facing direction of CSS3DObject

I have a 3D scene with a bunch of CSS objects that I want to rotate so that they are all pointing towards a point in space.
My CSS objects are simple rectangles that are a lot wider than they are high:
var element = document.createElement('div');
element.innerHTML = "test";
element.style.width = "75px";
element.style.height = "10px";
var object = new THREE.CSS3DObject(element);
object.position.x = x;
object.position.y = y;
object.position.z = z;
By default, the created objects are defined as if they are "facing" the z-axis. This means that if I use the lookAt() function, the objects will rotate so that the "test" text faces the point.
My problem is that I would rather rotate so that the "right edge" of the div points towards the desired point. I've tried fiddling with the up-vector, but I feel like that won't work because I still want the up-vector to point up. I also tried rotating the object Math.PI/2 along the y axis first, but lookAt() seems to ignore any previously set rotation.
It seems like I need to redefine the object's local z-vector instead, so that it runs along the global x-vector. That way the object's "looking at" direction would be to the right in the scene, and then lookAt() would orient it properly.
Sorry for probably mangling terminology, newbie 3D programmer here.
Object.lookAt( point ) will orient the object so that the object's internal positive z-axis points in the direction of the desired point.
If you want the object's internal positive x-axis to point in the direction of the desired point, you can use this pattern:
object.lookAt( point );
object.rotateY( - Math.PI / 2 );
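Applied to the div objects from the question, a minimal usage sketch looks like this (targetPoint is assumed to be whatever THREE.Vector3 the right edge should face):
// Sketch: make the div's right edge face targetPoint.
var element = document.createElement('div');
element.innerHTML = "test";
element.style.width = "75px";
element.style.height = "10px";
var object = new THREE.CSS3DObject(element);
object.position.set(x, y, z);
object.lookAt( targetPoint );      // local +z now points at targetPoint
object.rotateY( - Math.PI / 2 );   // swing the local +x axis (the right edge) toward it instead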
three.js r.84

libGDX- Exact collision detection - Polygon creation?

I've got a question about libGDX collision detection. It's a rather specific question, so I have not found a good solution on the internet yet.
So, I already created "humans" that consist of different body parts, each with rectangle-shaped collision detection.
Now I want to implement weapons and skills, which for example look like this:
Skill example image
Problem
Working with rectangles for collision detection would be really frustrating for players when there are skills like this: they would dodge a skill successfully, but the collision detector would still damage them.
Approach 1:
Before I started working with libGDX, I created an Android game with a custom engine and similar skills. There I solved the problem the following way:
Detect rectangle collision
Calculate overlapping rectangle section
Check every single pixel of the overlapping part of the skill for transparency
If there is any non-transparent pixel found -> Collision
That's a kind of heavy way, but as only overlapping pixels are checked and the rest of the game is really light, it works completely fine.
At the moment my skill images are loaded as "TextureRegion", where it is not possible to access single pixels.
I have found out that libGDX has a Pixmap class, which would allow such pixel checks. The problem is: additionally loading them as Pixmaps would 1. be even heavier and 2. defeat the whole purpose of the Texture system.
An alternative could be to load all skills as Pixmaps only. What do you think: would this be a good way? Is it possible to draw many Pixmaps on the screen without issues or lag?
Approach 2:
Another way would be to create Polygons with the shape of the skills and use them for the collision detection.
a)
But how would I define a Polygon shape for every single skill (there are over 150 of them)? Well, after browsing for a while, I found this useful tool: http://www.aurelienribon.com/blog/projects/physics-body-editor/
It allows you to create Polygon shapes by hand and then save them as JSON files, readable by the libGDX application. Now here come the difficulties:
The Physics Body Editor is connected to Box2d (which I am not using). I would either have to add the whole Box2d physics engine (which I do not need at all) just because of one tiny collision detection, OR I would have to write a custom BodyEditorLoader, which would be a tough, complicated and time-intensive task.
Some images of the same skill sprite differ a lot in their shapes (like the second skill sprite example). When working with the BodyEditor tool, I would not only have to define the shape of every single skill, I would have to define the shapes of several images (up to 12) of every single skill. That would be extremely time-intensive and a huge mess when implementing these dozens of polygon shapes.
b)
If there is any smooth way to automatically generate Polygons out of images, that could be the solution. I could simply connect every sprite section to a generated polygon and check for collisions that way. There are a few problems though:
Is there any smooth tool which can generate Polygon shapes out of an image (and does not need too much time to do so)?
I don't think that a tool like this (if one exists) can directly work with Textures. It would probably need Pixmaps. The Pixmaps would not need to stay loaded after the Polygon creation, though. Still an extremely heavy task!
My current thoughts
I'm stuck at this point because there are several possible approaches but all of them have their difficulties. Before I choose one path and continue coding, it would be great if you could leave some of your ideas and knowledge.
There might be helpful classes and code included in libGDX that solve my problems within seconds - as I am really new at libGDX I just don't know a lot about it yet.
Currently I think I would go with approach 1: Work with pixel detection. That way I made exact collision detections possible in my previous Android game.
What do you think?
Greetings
Felix
I, personally, feel like pixel-to-pixel collision would be overkill on performance and would still provide some instances where I would feel cheated (I got hit by the handle of the axe?).
If it were me, I would add a "Hitbox" to each skill. StreetFighter is a popular game which uses this technique. (newer versions are in 3D, but hitbox collision is still 2D) Hitboxes can change frame-by-frame along with the animation.
Empty spot here to add example images - google "Streetfighter hitbox" in the meantime
For your axe, there could be a defined rectangle hitbox along the edge of one or both ends - or even over the entire metal head of the axe.
This keeps it fairly simple, without having to mess with exact polygons, but also isn't overly performance heavy like having every single pixel being its own hitbox.
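To sketch the idea (in plain JavaScript for brevity, with made-up hitbox numbers; in libGDX the same check maps onto com.badlogic.gdx.math.Rectangle and its overlaps() method):
// Sketch of per-frame hitboxes; the frames and box coordinates are invented for illustration.
var axeHitboxes = [
    /* frame 0 */ [ { x: 40, y: 10, w: 24, h: 24 } ],   // only the metal head can hit
    /* frame 1 */ [ { x: 46, y: 6,  w: 28, h: 28 } ]    // the box moves with the animation
];
function overlaps(a, b) {
    return a.x < b.x + b.w && a.x + a.w > b.x &&
           a.y < b.y + b.h && a.y + a.h > b.y;
}
// Check the current animation frame's boxes against a target's body-part rectangle.
function skillHits(skillX, skillY, frameIndex, targetBox) {
    return axeHitboxes[frameIndex].some(function (box) {
        return overlaps({ x: skillX + box.x, y: skillY + box.y, w: box.w, h: box.h }, targetBox);
    });
}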
I've used that exact body editor you referenced and it has the ability to generate polygons and/or circles for you. I also made a loader for the generated JSON with the Jackson library. This may not be the answer for you since you'd have to implement Box2d, but here's how I did it anyway.
/**
 * Adds all the fixtures defined in jsonPath with the name 'lookupName', and
 * attach them to the 'body' with the properties defined in 'fixtureDef'.
 * Then converts to the proper scale with 'width'.
 *
 * @param body the body to attach fixtures to
 * @param fixtureDef the fixture's properties
 * @param jsonPath the path to the collision shapes definition file
 * @param lookupName the name to find in jsonPath json file
 * @param width the width of the sprite, used to scale fixtures and find origin.
 * @param height the height of the sprite, used to find origin.
 */
public void addFixtures(Body body, FixtureDef fixtureDef, String jsonPath, String lookupName, float width, float height) {
    JsonNode collisionShapes = null;
    try {
        collisionShapes = json.readTree(Gdx.files.internal(jsonPath).readString());
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    for (JsonNode node : collisionShapes.findPath("rigidBodies")) {
        if (node.path("name").asText().equals(lookupName)) {
            Array<PolygonShape> polyShapes = new Array<PolygonShape>();
            Array<CircleShape> circleShapes = new Array<CircleShape>();
            for (JsonNode polygon : node.findPath("polygons")) {
                Array<Vector2> vertices = new Array<Vector2>(Vector2.class);
                for (JsonNode vector : polygon) {
                    vertices.add(new Vector2(
                            (float) vector.path("x").asDouble() * width,
                            (float) vector.path("y").asDouble() * width)
                            .sub(width / 2, height / 2));
                }
                polyShapes.add(new PolygonShape());
                polyShapes.peek().set(vertices.toArray());
            }
            for (final JsonNode circle : node.findPath("circles")) {
                circleShapes.add(new CircleShape());
                circleShapes.peek().setPosition(new Vector2(
                        (float) circle.path("cx").asDouble() * width,
                        (float) circle.path("cy").asDouble() * width)
                        .sub(width / 2, height / 2));
                circleShapes.peek().setRadius((float) circle.path("r").asDouble() * width);
            }
            for (PolygonShape shape : polyShapes) {
                Vector2 vectors[] = new Vector2[shape.getVertexCount()];
                for (int i = 0; i < shape.getVertexCount(); i++) {
                    vectors[i] = new Vector2();
                    shape.getVertex(i, vectors[i]);
                }
                shape.set(vectors);
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }
            for (CircleShape shape : circleShapes) {
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }
        }
    }
}
And I would call it like this:
physics.addFixtures(body, fixtureDef, "ship/collision_shapes.json", shipType, width, height);
Then for collision detection:
public ContactListener shipsExplode() {
    ContactListener listener = new ContactListener() {
        @Override
        public void beginContact(Contact contact) {
            Body bodyA = contact.getFixtureA().getBody();
            Body bodyB = contact.getFixtureB().getBody();
            for (Ship ship : ships) {
                if (ship.body == bodyA) {
                    ship.setExplode();
                }
                if (ship.body == bodyB) {
                    ship.setExplode();
                }
            }
        }

        // The other ContactListener methods must be implemented as well (empty is fine here).
        @Override
        public void endContact(Contact contact) { }

        @Override
        public void preSolve(Contact contact, Manifold oldManifold) { }

        @Override
        public void postSolve(Contact contact, ContactImpulse impulse) { }
    };
    return listener;
}
then you would add the listener to the world:
world.setContactListener(physics.shipsExplode());
My sprites' width and height were small, since you're dealing in meters, not pixels, once you start using Box2d. One sprite was 0.8f high and 1.2f wide, for example. If you made the sprites' width and height in pixels, the physics engine would hit the speed limits that are built in: http://www.iforce2d.net/b2dtut/gotchas
Don't know if this still matters to you guys, but I built a small Python script that returns the pixel positions of the points on the edges of the image. There is room to improve the script, but for me, for now, it's OK...
from PIL import Image, ImageFilter

filename = "dship1"
image = Image.open(filename + ".png")
image = image.filter(ImageFilter.FIND_EDGES)
image.save(filename + "_edge.png")

cols = image.width
rows = image.height
points = []

# Walk the pixel data row by row and keep every non-transparent edge pixel.
w = 0
h = 0
for pixel in list(image.getdata()):
    if pixel[3] > 0:
        points.append((w, h))
    w += 1
    if w == cols:  # wrap to the next row
        w = 0
        h += 1

with open(filename + "_points.txt", "w") as nf:
    nf.write(',\n'.join('%s, %s' % x for x in points))
In case of updates you can find them here: export positions

Performance problems with scenekit

I've got a two-dimensional array of values that I want to visualize in 3D, and I'm using SceneKit under OS X for it. I've done it in a clumsy manner by using each column as a point on the X axis, each row as a point on the Z axis, and each value as a normalized point on the Y axis; I place a sphere at the vector defined by each data point. It works but it doesn't look too good.
I've also done this by building a mesh of lines based on @Matthew's function in Drawing a line between two points using SceneKit (the answer he posted, not the original question). For each point I use his function to draw two lines: one between my current point and the next point to the right, and another between my current point and the next point towards the front (except when there is no additional column/row, of course).
Using the second method, my results look much better... however the performance is quite hideous! It takes quite a long time to complete the initial rendering, and if I use a trackpad/mouse to rotate or translate the scene, I might as well get a cup of coffee to wait until my system is usable again (and this is not much hyperbole). Using the sphere method, things render and update very quickly.
Any advice on how to improve the performance when using the lines method? (Note that I am not trying to add both lines and spheres at the same time.) Code-wise, the only difference between the approaches is which of the following methods gets called (and that for each point, addPixelAt... is called once, but addLineAt... is called twice for most points).
- (SCNNode *)addPixelAtRow:(CGFloat)row Column:(CGFloat)column size:(CGFloat)size color:(NSColor *)color
{
    CGFloat radius = 0.5;
    SCNSphere *ball = [SCNSphere sphereWithRadius:radius * 1.5];
    SCNMaterial *material = [SCNMaterial material];
    [[material diffuse] setContents:color];
    [[material specular] setContents:color];
    [ball setMaterials:@[material]];
    SCNNode *ballNode = [SCNNode nodeWithGeometry:ball];
    [ballNode setPosition:SCNVector3Make(column, size, row)];
    [_baseNode addChildNode:ballNode];
    return ballNode;
}

- (SCNNode *)addLineFromRow:(CGFloat)row1 Column:(CGFloat)column1 size:(CGFloat)size1
                     toRow2:(CGFloat)row2 Column2:(CGFloat)column2 size2:(CGFloat)size2 color:(NSColor *)color
{
    SCNVector3 positions[] = {
        SCNVector3Make(column1, size1, row1),
        SCNVector3Make(column2, size2, row2)
    };
    int indices[] = {0, 1};
    SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:positions count:2];
    NSData *indexData = [NSData dataWithBytes:indices length:sizeof(indices)];
    SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:indexData
                                                                primitiveType:SCNGeometryPrimitiveTypeLine
                                                               primitiveCount:1
                                                                bytesPerIndex:sizeof(int)];
    SCNGeometry *line = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[element]];
    SCNMaterial *material = [SCNMaterial material];
    [[material diffuse] setContents:color];
    [[material specular] setContents:color];
    [line setMaterials:@[material]];
    SCNNode *lineNode = [SCNNode nodeWithGeometry:line];
    [_baseNode addChildNode:lineNode];
    return lineNode;
}
From the data that you've shown in your question I would say that your main problem is the number of draw calls. Yours is in the tens of thousands, which is way too many. It should probably be a lot closer to ~100.
The reason why you have so many draw calls is that you have so many distinct objects in your scene (each line). The better (but more advanced) solution would probably be to generate a single element for the entire mesh that consists of all the lines. If you want to achieve the same rendering with that mesh (with a color from cold to warm based on the height), then you could do that in a shader modifier.
However, in your case I would start by flattening all the lines (since that would be the smallest code change and should still have a significant performance improvement in your case).
(Optimizing performance is always an iterative process. Once you fix one thing there will be another thing which is the most expensive operation. Without your code I can only say what would help with the current performance problem)
Create an empty node (without adding it to your scene) and generate all the lines, adding them to this node. Then create a flattened copy of that node by calling flattenedClone on the node that contains all the lines:
SCNNode *nodeWithAllTheLines = [SCNNode node];
// create all the lines and add them to it...
SCNNode *flattenedNode = [nodeWithAllTheLines flattenedClone];
[_baseNode addChildNode:flattenedNode];
When you do this you should see a significant drop in the number of draw calls (the number after the diamond in the statistics) and hopefully a big increase in performance.

Creating THREE.Line's with different endpoints using THREE.BufferGeometry

I am creating several THREE.Lines using THREE.BufferGeometry. Initially my app had them all starting at the origin and things worked as expected. Now, I would like to be able to start (and end) them at any point.
This fiddle (http://jsfiddle.net/9nVqU/) illustrates (I hope) how changing one end of the line away from the origin causes unexpected results.
I wondered if it was because any given line follows on from the previous one; switching the start/end order didn't change anything though, so if that were true, I'd expect it to break.
Maybe I have the arrays set up incorrectly, or the attributes that tell THREE.js how to interpret them; I think I need 2 * 3 vertices for each line, but the changes I made to the buffer_geometry.attributes setup seemed to make things worse.
FWIW, the actual effect I'm trying to achieve is to selectively turn on and off the lines based on user input. I can do that already by changing the end position but then I lose that value and I don't want to store it elsewhere. I thought that I could move the start point to the end point to switch it off and then move the start point to the origin again to re-enable it. If there is a way to enable/disable lines individually with BufferGeometry, then that would clearly be better.
First of all, you would have to do this:
var line = new THREE.Line( buffer_geometry, material );
line.type = THREE.LinePieces;
Second, this is not supported in r.58, but should be.
As a work-around, you can hack WebGLRenderer.renderBufferDirect() like so:
// render lines
setLineWidth( material.linewidth );
var position = geometryAttributes[ "position" ];
primitives = ( object.type === THREE.LineStrip ) ? _gl.LINE_STRIP : _gl.LINES;
_gl.drawArrays( primitives, 0, position.numItems / 3 );
_this.info.render.calls ++;
_this.info.render.points += position.numItems;
three.js r.58
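For anyone on a newer three.js revision: no renderer hack is needed there. A rough sketch of the same idea (assuming a revision that has THREE.LineSegments and THREE.BufferAttribute; the attribute call is addAttribute in older revisions and setAttribute in newer ones, and scene/maxLines are placeholders):
// One segment per line, two vertices each; a line can be hidden by collapsing its end onto
// its start and restored later by writing the real endpoint back into the buffer.
var maxLines = 100;
var positions = new Float32Array(maxLines * 2 * 3);   // 2 endpoints * (x, y, z) per line
var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));
var material = new THREE.LineBasicMaterial({ color: 0xffffff });
var lines = new THREE.LineSegments(geometry, material);
scene.add(lines);
// Update one line's endpoints and tell three.js the buffer changed.
function setLine(i, sx, sy, sz, ex, ey, ez) {
    positions.set([sx, sy, sz, ex, ey, ez], i * 6);
    geometry.attributes.position.needsUpdate = true;
}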

dojo.gfx matrix transformation

Matrix transformations have got my head spinning. I've got a dojox.gfx group which I want to be draggable with Mover and then be able to rotate it around a certain point on the surface. My basic code looks like this:
this.m = dojox.gfx.matrix,
.
.
.
updateMatrix: function(){
    var mtx = this.group._getRealMatrix();
    var trans_m = this.m.translate(mtx.dx, mtx.dy);
    this.group.setTransform([this.m.rotateAt(this.rotation, 0, 0), trans_m]);
}
The rotation point is at (0,0) just to keep things simple. I don't seem to understand how the group is being rotated.
Any reference to a simple tutorial on matrix transformations would also help. The ones I've checked out haven't helped much.
Try the official dojox.gfx matrix tutorial. See if the official documentation helps.
The official documentation is where my head started spinning. Been staring at that for quite a long time because I couldn't make out how to feed the new coordinates into upcoming matrix transformations.
I've finally managed to figure out the problem though. It was a matter of connecting a listener to when the Mover triggers onMoveStop:
dojo.connect(movable, "onMoveStop", map, "reposition");
I then get the newly moved distances and feed them into any rotation or scaling matrix transformations in my graphics class:
updateMatrix: function(){
    // So far it is the group which is being rotated
    if (this.group) {
        if (!this.curr_matrix) {
            this.curr_matrix = this.initial_matrix;
        }
        this.group.setTransform([
            this.m.rotateAt(this.rotation, this.stage_w_2, this.stage_h_2),
            this.m.scaleAt(this.scaling, this.stage_w_2, this.stage_h_2),
            this.curr_matrix
        ]);
        //this.group.setTransform([
        //    this.m.rotateAt(this.rotation, mid_x, mid_y),
        //    this.m.scaleAt(this.scaling, mid_x, mid_y),
        //    this.initial_matrix]);
    }
},

reposition: function(){
    mtx = this.group._getRealMatrix();
    this.curr_matrix = this.m.translate(mtx.dx, mtx.dy);
},
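For completeness, the surrounding wiring looks roughly like this (a sketch in the legacy pre-AMD dojo style used above; map is the instance of the graphics class that holds updateMatrix/reposition, and the node id and surface size are placeholders):
// Rough sketch of the wiring.
dojo.require("dojox.gfx");
dojo.require("dojox.gfx.matrix");
dojo.require("dojox.gfx.Moveable");
var surface = dojox.gfx.createSurface("gfxHolder", 600, 400);
var group = surface.createGroup();
// ... create the shapes inside the group ...
var movable = new dojox.gfx.Moveable(group);
// When a drag ends, store the accumulated translation so the next
// rotate/scale transforms are composed on top of the new position.
dojo.connect(movable, "onMoveStop", map, "reposition");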
Life's dandy again. Thanks Eugene for the suggestions.
