How to (box) unwrap a model in three.js on the fly? - three.js

My game has a very simplistic retro pixel style, where all the models use flat box mapping (box unwrap). The unwrap is always the same process in my modeling program: applying a box-unwrap modifier with the same settings.
This gets tedious, as I have to explain to other people how to unwrap, and we all make mistakes sometimes or forget to unwrap part of a mesh, which requires a full re-export.
It would be better if I could code this somehow, so other people don't have to mess around with the UVs and can just focus on the model. The models get materials assigned automatically in-game; ideally the UVs would also be generated on the fly when I load the models in three.js.
Any ideas?

Maybe check this link:
https://github.com/mrdoob/three.js/issues/2065#issuecomment-6352320
This is about planar mapping and only works for THREE.Face3 triangles, but it is very easy to add a fourth UV coordinate. Once you have this working with the right scale for your object, you can iterate over all of your object's face normals and check which side each face is mostly facing. Then you apply this planar mapping for every side and voilà, box mapping :) Hope this helps!
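To make this concrete, here is a minimal sketch of the same idea for current three.js, where loaded models are BufferGeometry rather than Face3 lists. It assumes a non-indexed geometry (call geometry.toNonIndexed() first if needed); the function name boxUnwrap and the scale parameter are just made up for illustration.

// Box-unwrap sketch: project each triangle onto the box side its face normal mostly points at.
function boxUnwrap(geometry, scale = 1) {
  const pos = geometry.attributes.position;
  const uv = new Float32Array(pos.count * 2);
  const a = new THREE.Vector3(), b = new THREE.Vector3(), c = new THREE.Vector3();
  const ab = new THREE.Vector3(), ac = new THREE.Vector3(), n = new THREE.Vector3();
  for (let i = 0; i < pos.count; i += 3) {
    a.fromBufferAttribute(pos, i);
    b.fromBufferAttribute(pos, i + 1);
    c.fromBufferAttribute(pos, i + 2);
    // The face normal decides which side of the "box" this triangle belongs to.
    n.crossVectors(ab.subVectors(b, a), ac.subVectors(c, a));
    const ax = Math.abs(n.x), ay = Math.abs(n.y), az = Math.abs(n.z);
    [a, b, c].forEach((p, j) => {
      let u, v;
      if (ax >= ay && ax >= az) { u = p.y; v = p.z; }      // mostly facing +/-X: project onto YZ
      else if (ay >= ax && ay >= az) { u = p.x; v = p.z; } // mostly facing +/-Y: project onto XZ
      else { u = p.x; v = p.y; }                           // mostly facing +/-Z: project onto XY
      uv[(i + j) * 2] = u * scale;
      uv[(i + j) * 2 + 1] = v * scale;
    });
  }
  geometry.setAttribute('uv', new THREE.BufferAttribute(uv, 2));
}

Run this once on each geometry right after loading, before the materials are assigned, and the generated UVs replace the manual box-unwrap step.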

Related

Large lagging on mouse movement with SketchUp Dae model

I've designed a 3D model in SketchUp without using any textures. I'm facing an issue with lagging during mouse movement and rotation. When I export the model in DAE format and import it into the three.js online editor, mouse movement becomes very slow; I think there is an fps drop. I can't figure out what's wrong with the model I designed. I need your suggestions and ideas on how to resolve this issue. Thanks for your support. I've uploaded an image of the 3D model, please take a look.
Object Count: 98,349, Vertices: 2,107,656, Triangles: 702,552
Object Count: 98,349
The object count results in an equal number of draw calls. Such a high value will degrade performance no matter how complex the respective geometries are.
I suggest you redesign the model and merge individual objects as much as possible. Also try to lower the number of vertices and faces.
Keep in mind that three.js does not automatically merge or batch render items, so it's your responsibility to optimize assets for rendering. It's best to do this right when designing the model, or in code via methods like BufferGeometryUtils.mergeBufferGeometries() or via instanced rendering.
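As a rough sketch of the code route (assuming meshes that share a single material and the BufferGeometryUtils addon; the exact import path and function name vary between three.js releases):

import * as THREE from 'three';
import { mergeBufferGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// Collapse many static meshes that share a material into a single draw call.
function mergeStaticMeshes(meshes, material) {
  const geometries = meshes.map((mesh) => {
    mesh.updateWorldMatrix(true, false);   // make sure matrixWorld is up to date
    const g = mesh.geometry.clone();
    g.applyMatrix4(mesh.matrixWorld);      // bake the world transform into the vertices
    return g;
  });
  return new THREE.Mesh(mergeBufferGeometries(geometries), material); // one object, one draw call
}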

Is there a way to create simple animations "on the fly" in modern OpenGL?

I think this requires a bit of background information:
I have been modding Minecraft for a while now, but I always wanted to make my own game, so I started digging into the freshly released LWJGL3 to actually get things done. Yes, I know it's a bit low level and I should use an engine and so on... indeed, I already tried some engines and they never quite match what I want to do, so I decided to tackle the problem at its root.
So far, I kind of understand how to render meshes, move the "camera", etc. and I'm willing to take the learning curve.
But the thing is, at some point all the tutorials start to explain how to load models and create skeletal animations and so on... but I don't think I really want to go that way. A lot of things about working with Minecraft code were awful, but I liked how I could create models and animations from Java code. Sure, it did not look super realistic, but since I'm not great with Blender either, I doubt having "classic" models and animations would help. Anyway, in that code I could rotate a box around to make a creature look at the player, and I could use a sine function to move legs and arms (or wings, in my case). That worked because Minecraft used immediate mode, so Java could directly tell the graphics card where to draw each vertex.
So, actual question(s): Is there any good way to make dynamic animations in modern (3.3+) OpenGL? My models would basically be a hierarchy of shapes (boxes or whatever) and I want to be able to rotate them on the fly, but I'm not sure how to organize that. Would I store all the translation/rotation matrices for each sub-shape? Would that put a hard limit on the number of sub-shapes a model could have? Has anyone tried something like that?
Edit: For clarification, what I did looked something like this:
Create a model: https://github.com/TheOnlySilverClaw/Birdmod/blob/master/src/main/java/silverclaw/birds/client/model/ModelOstrich.java
The model is created as a bunch of boxes in the constructor, the render and setRotationAngles methods set scale and rotations.
You should follow an OpenGL tutorial in order to understand the basics.
Let me suggest "Learning Modern 3D Graphics Programming", and especially this chapter, where you move a robot arm with multiple joints.
I did a port in Java using JOGL here, but you can easily port it to LWJGL.
What you are looking for is exactly skeletal animation; the only difference is that you do not want to load animations for your bones, but want to compute / generate the transforms on the fly.
You basically have a hierarchy of bones with geometry attached to it. It looks like you want to manipulate this geometry "rigidly", so before sending your meshes / transforms to the GPU (the classic way), you start by computing the new transforms in model or world space, then send those freshly computed matrices and draw your geometries on the GPU the standard way.
As Sorin said, to compute each transform you simply iterate over your hierarchy and accumulate transforms, combining the transform of the parent bone with your local transform w.r.t. the parent.
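A minimal sketch of that accumulation step, in plain JavaScript with column-major 4x4 arrays (the layout OpenGL expects); the bone fields local, world and children are made-up names, and the same few lines port directly to Java:

// Multiply two column-major 4x4 matrices: out = a * b.
function mul4(a, b) {
  const out = new Float32Array(16);
  for (let col = 0; col < 4; col++) {
    for (let row = 0; row < 4; row++) {
      let sum = 0;
      for (let k = 0; k < 4; k++) sum += a[k * 4 + row] * b[col * 4 + k];
      out[col * 4 + row] = sum;
    }
  }
  return out;
}

// Walk the bone hierarchy root-to-leaf, accumulating parent * local.
// The resulting world matrices are what you upload as uniforms before drawing.
function updateWorldTransforms(bone, parentWorld) {
  bone.world = mul4(parentWorld, bone.local);
  for (const child of bone.children) updateWorldTransforms(child, bone.world);
}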
Yes and no.
You can have your hierarchy of shapes and store a relative transform for each.
For example, the "player" would have a translation to (100, 100, 10) (where the player is), and the "head" subcomponent would have an additional translation of (0, 0, 5) (just a bit higher on the z axis).
You can store these as matrices (they can encode translation, rotation and scaling) and use glPushMatrix and glPopMatrix to push and pop matrices on a stack maintained by OpenGL.
The draw() function (or whatever you call it) should look something like:
glPushMatrix();
glMultMatrixf(my_transform); // or individual glTranslatef / glRotatef / glScalef calls
// draw this node's own mesh here
for (child : children) { child.draw(); } // pseudocode: recurse into the children
glPopMatrix();
This gives you a hierarchical setup so that objects move with their parent. Alternatively, you can keep a stack in main memory and do the multiplications yourself (use a library). I think the OpenGL stack may have a depth limit (implementation dependent), but if you handle it yourself the only limit is the amount of RAM you can use. Once all the matrices are multiplied, rendering takes the same amount of time; that is, it doesn't matter for performance how deep a mesh sits in the hierarchy.
For actual animations you need to compute the intermediate transformations. For example, for a crouch animation you probably want a few frames in between so that the camera doesn't just jump to the low position. You can do this with a time-based linear interpolation between the start and end positions, but this only covers simple animations and you still have to implement it yourself.
Anything more complicated (i.e. modifying the mesh based on the bone links) you would need to implement yourself.
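For the time-based interpolation, the core is only a couple of lines; here is a minimal sketch (crouchHeight, startHeight, endHeight and duration are made-up names):

function lerp(a, b, t) { return a + (b - a) * t; }

// Blend from startHeight to endHeight over `duration` seconds, clamping t to [0, 1].
function crouchHeight(startHeight, endHeight, elapsedSeconds, duration) {
  const t = Math.min(Math.max(elapsedSeconds / duration, 0), 1);
  return lerp(startHeight, endHeight, t);
}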

I don't fully understand D2D1_FIGURE_BEGIN: why is it needed, what's the difference, and why does Microsoft's sample code mismatch types anyway?

I'm reading up on Direct2D before I migrate my GDI code to it, and I'm trying to figure out how paths work. I understand most of the work involved with geometries and geometry sinks, but there's one thing I don't understand: the D2D1_FIGURE_BEGIN type and its parameter to BeginFigure().
First, why is this value even needed? Why does a geometry need to know ahead of time whether it's filled or hollow? I don't know of any other drawing API that cares whether path objects are filled ahead of time; you just define the endpoints of the shapes and then call fill() or stroke() to draw your path, so how are geometries any different?
And if this parameter is necessary, how does choosing one value over the other affect the shapes I draw?
Finally, if I understand the usage of this enumeration correctly, you're supposed to use filled paths only with FillGeometry() and hollow paths only with DrawGeometry(). However, the hourglass example here, cited by several method documentation pages (like the one for BeginFigure()), creates a filled figure and draws it with both DrawGeometry() and FillGeometry()! Is this undefined behavior? Does it have anything to do with the blue border around the gradient in the example picture, which I don't see anywhere in the code?
Thanks.
EDIT: Okay, I think I understand what's going on with the gradient's weird outline: the gradient is also transitioning alpha values, and the fill overlaps the stroke because the stroke is centered on the line and the fill is drawn after the stroke. That still doesn't explain why I can both fill and stroke a filled geometry, or what the difference between hollow and filled geometries is...
Also I just realized that hollow geometries are documented as not having bounds. Does this mean that hollow geometries are purely an optimization for stroke-only geometries and otherwise behave identically to a filled geometry?
If you want to better understand Direct2D's geometry system, I recommend studying the WPF geometry system. WPF, XPS, Direct2D, Silverlight, and the newer "XAML" frameworks all use the same building blocks (the same "language", if you will). I found it easier to understand the declarative object-oriented API in WPF, and after that it was a breeze to work with the imperative API in Direct2D. You can think of WPF's mutable geometry system as an implementation of the "builder" pattern from Java, where the build() method is behind the scenes (hidden from you) and spits out an immutable Direct2D geometry when it comes time to render things on-screen (WPF uses something called "MIL", which IIRC/AFAICT, Direct2D was forked from. They really are the same thing!) It is also straightforward to write code that converts between the two representations, e.g. walking a WPF PathGeometry and streaming it into a Direct2D geometry sink, and you can also use ID2D1PathGeometry::Stream and a custom ID2D1GeometrySink implementation to reconstitute a WPF PathGeometry.
(BTW this is not theoretical :) It's exactly what I do in Paint.NET 4.0+: I use a WPF-esque declarative, mutable object model that spits out immutable Direct2D geometries at render time. It works really well!)
Okay, anyway, to get directly to your specific question: BeginFigure() and D2D1_FIGURE_BEGIN map directly to the PathFigure.IsFilled property in WPF. In order to get an intuitive understanding of what effect this has, you can use something like KaXAML to play around with some geometries from WPF or Silverlight samples and see what the results look like. And the documentation is definitely better for WPF and Silverlight than for Direct2D.
Another key concept is that DrawGeometry is basically a helper method. You can accomplish the same thing by first widening your geometry with ID2D1Geometry::Widen and then using FillGeometry ("widening" seems like a misnomer to me, btw: in Photoshop or Illustrator you'd probably use a verb like "stroke"). That's not to say that either one always performs better/worse ... be sure to benchmark. I've seen it go both ways. The reason you can think of this as a helper method is dependent on the fact that the lowest level of the rasterization engine can only do one thing: fill a triangle. All other drawing "primitives" must be converted to triangle lists or strips (this is also why ID2D1Mesh is so fast: it bypasses all sorts of processing code!). Filling a geometry requires tessellation of its interior to a list of triangle strips which can then be filled by Direct3D. "Drawing" a geometry requires applying a stroke (width and/or style): even a simple 1-pixel wide straight line must be first converted to 2 filled triangles.
Oh, also, if you want to compute the "real" bounds of a geometry with hollow figures, use ID2D1Geometry::GetWidenedBounds with a strokeWidth of zero. This is a discrepancy between Direct2D and WPF that puzzles me. Geometry.Bounds (in WPF) is equivalent to ID2D1Geometry::GetWidenedBounds(0.0f).

Make a mesh unprintable, but still viewable with three.js

Is there a way to make a mesh unprintable with a 3D printer, but still viewable with three.js?
The motivation is that I want to show users a preview of a mesh before they buy it. But as the JS code is viewable, they could download the mesh without paying for it. Degrading the quality of the preview mesh would be one way, but as the quality of the mesh is a selling point, I would like to avoid that.
My idea was to add some kind of triangulation defects which would prevent the mesh from being printed, but which would not prevent three.js from showing it.
Tools like Netfabb or MeshLab should also not be able to automatically repair the mesh.
Is there something like the bad-sector copy protection equivalent for 3D models?
Just a few ideas.
1) Augment your shaders to ignore some interval of vertices in the buffer (like every 3rd or so). That way you can add "garbage" to the model file so it cannot be lifted easily from the network.
2) Even then, a savvy user can still pull the data out of the buffer, unless you split the model into many chunks and render them out of order, or only render the front half of the model, making it less useful for 3D printing. You could also render in split views, or use stereoscopic interlacing with a separation of zero.
3) Only render a non-symmetrical half of your model, with the camera controls locked to that half :P
Kinda wonky, a ton of work to implement, and someone will still find a way, I'm sure. But that's my two cents worth anyway, hope it helps.
I've seen some online shops do the preview with renders taken every 10-30 degrees around the model. That way you only serve the resulting images, not the model.
Why not show a detailed HD video of your model?
If the mesh is non-manifold it will not print.
a) Render server-side and stream the results as an interactive video.
b) Destroy the mesh while keeping the normals intact for shading. You can randomly flip faces and render double-sided, or "extrude" edges to mess up the topology. As long as you map the normals correctly, it will shade without any of these defects showing.
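A rough three.js sketch of the face-flipping part, assuming a non-indexed BufferGeometry (call geometry.toNonIndexed() first); flipRandomFaces and probability are made-up names, and you should check the result with your own material, since double-sided rendering can flip normals on back faces:

// Reverse the winding of a random subset of triangles. Positions, normals and UVs
// are swapped together, so every vertex keeps its own attributes and shading is
// preserved, but the flipped faces make the mesh non-manifold for slicers.
function flipRandomFaces(geometry, probability = 0.3) {
  const swap = (attr, i, j) => {
    const a = attr.array, n = attr.itemSize;
    for (let k = 0; k < n; k++) {
      const tmp = a[i * n + k];
      a[i * n + k] = a[j * n + k];
      a[j * n + k] = tmp;
    }
    attr.needsUpdate = true;
  };
  const pos = geometry.attributes.position;
  for (let i = 0; i < pos.count; i += 3) {
    if (Math.random() > probability) continue;
    // Swapping two vertices of a triangle reverses its winding order.
    for (const name of ['position', 'normal', 'uv']) {
      const attr = geometry.attributes[name];
      if (attr) swap(attr, i + 1, i + 2);
    }
  }
}

Render the result with a double-sided material (side: THREE.DoubleSide) so the flipped faces remain visible from the outside.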

Animate 3D object disassembly in Unity3D

My question is just about choosing the right approach, because I'm not sure about the solution.
I have a 3D model in my project, and at some point I want to show an animated disassembly. The object is made of something like 200 pieces,
so animating them one by one with keyframes is time-consuming.
The animation I'm looking for is like an explosion from the center of the object, so the parts just move outward from its center.
Example image:
What would you do?
What is the best way to manage such a task?
I would code it. Maybe I am biased because I am a programmer, but animating it would be a pain.
So I would import the model into Unity3D, then grab all the parts and store them in a list. Once I have the 200 parts I can do anything I want with them.
I would then attach rigidbodies and box colliders to them all -- this can be done programmatically. Then you can initiate the explosion by adding a velocity to each part. If you want it to be fairly realistic and somewhat random, you can give each object a mass and use the equation F = ma for the explosion; that is, each part will get a different acceleration depending on its mass.
