I have a helm, a sword, and a shield, each using its own texture, so 3 draw calls. I want them to share a single texture to get the draw calls down to 1, but without combining them into one mesh, since I need to be able to disable any of them at random, and the sword's and shield's positions change when attacking or when dropped to the ground. Is it doable?
If so, how? I'm new to this, thanks.
To save on draw calls, you can use the same material for all three objects without combining their meshes. Then you create a texture file that has the three textures next to each other, and edit the UV maps for the models to use their own parts of the combined texture.
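To make the UV edit concrete, here is a rough, engine-agnostic sketch of the arithmetic (written in TypeScript purely for illustration; the tile layout and names are made up, and in Unity you would apply the same remapping to Mesh.uv or simply re-bake the UVs in your modelling package):

```typescript
// Illustrative only: each object keeps its own mesh; we just squeeze its
// existing 0..1 UVs into that object's tile of the shared atlas texture.
interface Tile { offsetU: number; offsetV: number; scaleU: number; scaleV: number; }

// Example 2x2 atlas layout (made-up numbers): helm bottom-left, sword
// bottom-right, shield top-left, one quarter left spare.
const tiles: Record<string, Tile> = {
  helm:   { offsetU: 0.0, offsetV: 0.0, scaleU: 0.5, scaleV: 0.5 },
  sword:  { offsetU: 0.5, offsetV: 0.0, scaleU: 0.5, scaleV: 0.5 },
  shield: { offsetU: 0.0, offsetV: 0.5, scaleU: 0.5, scaleV: 0.5 },
};

// Remap a mesh's UVs (which originally covered the full 0..1 range of its old
// standalone texture) into its sub-rectangle of the atlas.
function remapIntoAtlas(uvs: Array<[number, number]>, tile: Tile): Array<[number, number]> {
  return uvs.map(([u, v]) => [
    tile.offsetU + u * tile.scaleU,
    tile.offsetV + v * tile.scaleV,
  ]);
}
```

Because all three objects then reference the same texture and material, the engine can batch them into fewer draw calls even though they stay separate meshes that you can enable, disable, and move independently.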
It's possible to do, and it requires what's called a texture atlas. I believe this is often done as an optimization step with the frequently used smaller textures that make up a scene.
I don't think the free version of Unity has built-in support for this (I might be wrong in assuming that the Pro version supports it natively), but I believe there are also plugins. A quick Google search found "Texture Packer", which appears to do what you want; the paid version is $15, but there's a free version too, so it's worth a closer look: http://forum.unity3d.com/threads/texture-packer-unity-tutorial.184596/
I don't have experience with any of these yet as I'm not at a stage where I'm trying to do this with my project, but when I get there I think Texture Packer is where I'll start.
Thanks,
Greg
Is there a way to make a mesh unprintable on a 3D printer, but still viewable with three.js?
The motivation is that I want to show users a preview of a mesh before they buy it. But as the JS code is viewable, they could download the mesh without paying for it. Degrading the quality of the preview mesh would be one way, but as the quality of the mesh is a selling point I would like to avoid that.
My idea was to add some kind of triangulation defects which would prevent the mesh from being printed, but which would not prevent three.js from showing it.
Tools like Netfabb or Meshlab should also not be able to automatically repair the mesh.
Is there something like a bad sector copy protection equivalent for 3d models?
Just a few ideas.
1) Augment your shaders to ignore some interval of vertices from the buffer (like every 3rd or something). In this way you can add "garbage" to the model file so it cannot be lifted easily from the network.
2) Once the model is in the buffer, a savvy user can still pull it out, unless you split the model up into many chunks and render them out of order, or only render the front half of the model, making it less useful for 3D printing. One could also render in split views, or stereoscopic interlaced with a separation of zero.
3) Only render a non-symmetrical half of your model, with the camera controls locked to that half :P
Kinda wonky, a ton of work to implement, and I'm sure someone will still find a way. But that's my two cents' worth anyway; hope it helps.
I've seen some online shops show previews using renders taken every 10-30 degrees around the model. That way you only send the resulting images, not the model.
Why not show a detailed HD video of your model?
If the mesh is non-manifold it will not print.
a) Render server-side and stream the results as an interactive video.
b) Destroy the mesh while keeping the normals intact for shading. You can randomly flip faces and render double-sided, or "extrude" edges to mess up the topology. As long as you map the normals correctly, it will shade correctly without any of these defects showing.
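As a rough sketch of idea (b), assuming three.js with an indexed BufferGeometry (the helper name and flip ratio are invented for the example, not part of any API):

```typescript
import * as THREE from "three";

// Bake the vertex normals first, then randomly flip the winding order of a
// fraction of the triangles. Shading still looks right because the per-vertex
// normals are untouched, but the face orientations in the exported data are
// inconsistent, which printing/repair tools dislike.
function scrambleWinding(src: THREE.BufferGeometry, flipRatio = 0.5): THREE.BufferGeometry {
  const geo = src.clone();
  geo.computeVertexNormals(); // normals computed from the original, consistent winding

  const index = geo.getIndex();
  if (index) {
    const tri = index.array;
    for (let i = 0; i < tri.length; i += 3) {
      if (Math.random() < flipRatio) {
        const tmp = tri[i + 1]; // swapping two indices reverses the winding
        tri[i + 1] = tri[i + 2];
        tri[i + 2] = tmp;
      }
    }
    index.needsUpdate = true;
  }
  return geo;
}

// Render double-sided so the flipped faces are not back-face culled:
// const mesh = new THREE.Mesh(
//   scrambleWinding(originalGeometry),
//   new THREE.MeshStandardMaterial({ side: THREE.DoubleSide })
// );
```

A determined user can still re-orient the faces afterwards, so treat this as a speed bump rather than real protection.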
Three.js noob here trying to do 2D visualization.
I used d3.js to make an interactive visualization involving thousands of nodes (rectangle-shaped). Needless to say, there were performance issues during animation, because the browser has to create an SVG DOM element for every one of those 10 thousand nodes.
I wish to recreate the same visualization using WebGL in order to leverage hardware acceleration.
Three.js is the library I have chosen because of its popularity (by the way, I did look at PixiJS and its API didn't appeal to me). I want to know the best approach for doing 2D graphics in three.js.
I tried creating one PlaneGeometry for every rectangle, but it seems that 10 thousand plane geometries are not the way to go (animation becomes super slow).
I am probably missing something. I just need to know the best primitive way to create 2D rectangles while still being able to identify each one uniquely so I can interact with them once drawn.
Thanks for any help.
EDIT: Would you guys suggest using another library, by any chance?
I think you're on the right track with looking at WebGL, but depending on what you're doing in your visualization you might need to get closer to the metal than "out of the box" three.js.
I recommend taking a look at GLSL and at how you can implement your visualization using vertex and fragment shaders. You can still use three.js for a lot of the WebGL plumbing.
The reason you'll probably need to get directly into GLSL shader work is that you want to take as much of the poly-manipulation logic out of JavaScript as possible. Any time you ask JS to do a tight loop over tens of thousands of polys to update positions, etc., you are going to struggle with CPU usage.
It is going to be much more performant to have JS pass data parameters in to your shaders and let the vertex manipulation happen there.
Take a look here: http://www.html5rocks.com/en/tutorials/webgl/shaders/ for a nice shader tutorial.
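To make that concrete, here is a minimal sketch under the assumption of a reasonably recent three.js: one unit quad is instanced 10,000 times, the per-rectangle position and size live in instanced attributes (the attribute names offset/scale and all the numbers are invented for the example), and the vertex shader does the placement, so JavaScript never loops over the rectangles per frame:

```typescript
import * as THREE from "three";

// One shared unit quad; per-rectangle data goes into instanced attributes.
const quad = new THREE.PlaneGeometry(1, 1);
const geometry = new THREE.InstancedBufferGeometry();
geometry.index = quad.index;
geometry.attributes.position = quad.attributes.position;

const COUNT = 10000;
const offsets = new Float32Array(COUNT * 2); // x, y per rectangle
const scales = new Float32Array(COUNT * 2);  // width, height per rectangle
for (let i = 0; i < COUNT; i++) {
  offsets[2 * i] = Math.random() * 100;
  offsets[2 * i + 1] = Math.random() * 100;
  scales[2 * i] = 0.5 + Math.random();
  scales[2 * i + 1] = 0.5 + Math.random();
}
geometry.setAttribute("offset", new THREE.InstancedBufferAttribute(offsets, 2));
geometry.setAttribute("scale", new THREE.InstancedBufferAttribute(scales, 2));

// The vertex shader scales and places each instance; three.js supplies
// position, projectionMatrix and modelViewMatrix automatically.
const material = new THREE.ShaderMaterial({
  vertexShader: `
    attribute vec2 offset;
    attribute vec2 scale;
    void main() {
      vec3 p = vec3(position.xy * scale + offset, 0.0);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: `
    void main() { gl_FragColor = vec4(0.2, 0.6, 1.0, 1.0); }
  `,
});

const mesh = new THREE.Mesh(geometry, material);
mesh.frustumCulled = false; // the shared quad's bounding volume is meaningless here
// scene.add(mesh); // all 10,000 rectangles render in a single draw call
```

For interaction, a common approach is GPU picking: give each instance an ID encoded as a color, render it to an offscreen target, and read back the pixel under the cursor instead of ray-casting against 10,000 separate meshes.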
I used ReconstructMe to scan half of my body (arm and head). The result I got is a 3D mesh, which I opened in 3ds Max. What I need to do now is add animation/motion to the 3D arm and head.
I think ReconstructMe created a mesh. Do I need to convert that mesh to a 3D object before adding animation? If so, how do I do it?
Do I need to separate the head and the arm to add different animations to them? How do I do that?
I am a beginner in 3ds max. I am using 3ds max 2012, student edition.
Typically you would set up bones, link the mesh to the bones with a Skin or Physique modifier, then animate the bones as needed.
You can have one mesh or separate meshes, depending on your needs.
For setting up the rigging, it would be good to utilize a tutorial like this
http://www.digitaltutors.com/11/training.php?pid=332
I find Digital Tutors to be very concise and detailed, so anybody can grasp the concepts if they're patient enough. Depending on the motion you'd like, some parts of the bones will require FK (forward kinematics) or IK (inverse kinematics), or a mixture of both FK/IK control in areas like the elbows of the arms, etc.
Certain other parts of the character would also benefit from the ability to use CAT controls. Throughout the whole rigging process, the biggest foundation to maintain is the hierarchy and the process of parenting/linking the controls correctly.
Also, your mesh's topology needs to be correct. When scanning from an outside source you will get either (a) a lot of triangles or (b) bad edge flow, so before the rigging process make sure to take the time to get your scan's topology into the state it should be in.
Given a set of 2D images that cover all sides of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library which allows you to specify a 3D model with bitmap textures. The library would depend on platform you are using, but start with looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe this is actually theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension).
I think you'd have to make a ton of assumptions about your geometry, and even then, you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object, not a volumetric object itself.
What you can do, like Tomas suggested, is slap these 2D images onto something. However, you will still need to construct a triangle mesh surface, and actually do all the modeling, for this to present a 3D surface.
I hope this helps.
What currently exists that can do anything close to what you are asking for automagically is extremely proprietary. There are no libraries, but there are some products.
The core issue is matching corresponding points in the images and being able to say: this spot in image A is this spot in image B, and they both match this spot in image C, and so on.
There are three ways to go about this: manual matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, $1,145.00US, supports manual matching and coded targets. You print out a bunch of images, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, $2,595.00US, adds texture matching. Tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera, where you use a consistent focal length for every shot and go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets like colored sticker dots to the object to help you generate contours.
If your images are drawings, like a profile, plan view, etc., PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually, because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.
Is it possible to construct a 3D model of a still object if various images, along with depth data, are gathered from various angles? What I was thinking was to have a sort of circular conveyor belt on which a Kinect is placed, while the real object to be reconstructed in 3D space sits in the middle. The conveyor belt then rotates around the object in a circle and lots of images are captured (perhaps 10 images per second), which would allow the Kinect to capture an image from every angle, including the depth data. Theoretically this seems possible. The model would also have to be recreated with its textures.
What I would like to know is whether there are any similar projects/software already available; any links would be appreciated.
Whether this is possible within perhaps 6 months.
How I would proceed to do this, e.g. any similar algorithms you could point me to, and such.
Thanks,
MilindaD
It is definitely possible, and there are a lot of 3D scanners out there that work on more or less the same principle of stereoscopy.
You probably know this, but just to give context: the idea is to observe the same point in two images taken from different positions and to use triangulation to compute the 3D coordinates of that point in your scene. Although this part is quite easy, the big issue is finding the correspondence between the points in your two images, and this is where you need good software to extract and recognize similar points.
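For the simplest case of two rectified cameras, the triangulation step mentioned above reduces to a textbook formula, with $f$ the focal length, $B$ the baseline between the two camera centres, $(c_x, c_y)$ the principal point, and $d = x_L - x_R$ the disparity of a matched point:

$$Z = \frac{f\,B}{d}, \qquad X = \frac{(x_L - c_x)\,Z}{f}, \qquad Y = \frac{(y_L - c_y)\,Z}{f}$$

The formula itself is the easy part; as noted above, the hard part is reliably finding the $x_L \leftrightarrow x_R$ matches in the first place.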
There is an open-source project called Meshlab for 3D vision, which includes 3D reconstruction algorithms. I don't know the details of the algorithms, but the software is definitely a good entry point if you want to play with 3D.
I used to know some other ones; I will try to find them and add them here:
Insight3d
Check out https://bitbucket.org/tobin/kinect-point-cloud-demo/overview, which is a code sample for the Kinect for Windows SDK that does specifically this. Currently it uses the bitmaps captured by the depth sensor and iterates through the byte array to create a point cloud in the PLY format that can be read by MeshLab. The next stage for us is to apply/refine a Delaunay triangulation algorithm to form a mesh instead of points, to which a texture can be applied. A third stage would then be a mesh-merging step to combine multiple captures from the Kinect into a full 3D object mesh.
This is based on some work I did in June using the Kinect for the purposes of 3D printing capture.
The .NET code in this source code repository will, however, get you started with what you want to achieve.
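The general shape of that first stage (depth frame to PLY point cloud) is roughly the following. This is a language-agnostic sketch written in TypeScript, not the actual .NET code from the repository, and the intrinsics FX/FY/CX/CY are only approximate values for the Kinect's 640x480 depth camera:

```typescript
// Back-project a 640x480 depth frame (millimetres per pixel) into 3D points
// using approximate Kinect depth-camera intrinsics, then emit an ASCII PLY
// file that MeshLab can open.
const WIDTH = 640, HEIGHT = 480;
const FX = 585, FY = 585, CX = 320, CY = 240; // approximate intrinsics

function depthFrameToPly(depthMm: Uint16Array): string {
  const points: string[] = [];
  for (let v = 0; v < HEIGHT; v++) {
    for (let u = 0; u < WIDTH; u++) {
      const z = depthMm[v * WIDTH + u];
      if (z === 0) continue;            // 0 means "no reading" for that pixel
      const zM = z / 1000;              // millimetres -> metres
      const x = ((u - CX) * zM) / FX;   // pinhole back-projection
      const y = ((v - CY) * zM) / FY;
      points.push(`${x} ${y} ${zM}`);
    }
  }
  const header = [
    "ply",
    "format ascii 1.0",
    `element vertex ${points.length}`,
    "property float x",
    "property float y",
    "property float z",
    "end_header",
  ];
  return header.concat(points).join("\n");
}
```

Meshing (e.g. Delaunay or Poisson reconstruction) and merging multiple captures are then the separate later stages described above.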
Autodesk has a piece of software that will do what you are asking for; it is called "Photofly" and is currently in the Labs section. Using a series of images taken from multiple angles, the 3D geometry is created and then photo-mapped with your images to create the scene.
If you are more interested in the theoretical part of this problem (I mean, if you want to know how it works), here is a document from Microsoft Research about a moving depth camera and 3D reconstruction.
Try out VisualSfM (http://ccwu.me/vsfm/) by Changchang Wu (http://ccwu.me/)
It takes multiple images from different angles of the scene and outputs a 3D point cloud.
The algorithm is called "Structure from Motion".
Brief idea of the algorithm: it involves extracting feature points in each image, finding correspondences between them across images, building feature tracks, and estimating the camera matrices and thereby the 3D coordinates of the feature points.
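At its core, the geometric part of that pipeline is a bundle-adjustment problem: given the matched feature tracks, estimate the camera matrices $P_i$ and 3D points $X_j$ that minimise the total reprojection error

$$\min_{\{P_i\},\,\{X_j\}} \sum_{i,j} \left\lVert \mathbf{x}_{ij} - \pi\!\left(P_i X_j\right) \right\rVert^2$$

where $\mathbf{x}_{ij}$ is the observed image position of point $j$ in image $i$ and $\pi$ is the perspective projection (division by the third coordinate).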