I wish to construct an animation in ParaView starting from some files obtained in an optimization process. I have a mesh made of tetrahedra, and at each iteration I have a scalar field on this mesh.
I could create a VTK file for each iteration, but such a file is larger than 100 MB, and it would take more than 15 GB to store all the VTK files. Moreover, the geometry part of each VTK file is the same, so I suspect there is a more efficient solution. Hence my question:
Is it possible to make animations in ParaView by changing a scalar field on a fixed geometry?
(if this is not the right forum to ask this, please let me know where it could be more appropriate)
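For reference, ParaView can at least play a set of files as a time series via a .pvd collection file; a minimal sketch is below (the file names are hypothetical, and note that a plain .pvd series only organizes the time steps; it does not by itself share the geometry between them):

    <?xml version="1.0"?>
    <VTKFile type="Collection" version="0.1">
      <Collection>
        <!-- one entry per optimization iteration -->
        <DataSet timestep="0" part="0" file="iteration_0000.vtu"/>
        <DataSet timestep="1" part="0" file="iteration_0001.vtu"/>
        <DataSet timestep="2" part="0" file="iteration_0002.vtu"/>
      </Collection>
    </VTKFile>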
I have an STL file of a cylinder and an STL file of a sphere.
I want to use these two STL files to produce a third: an STL of a ball with a hole through it.
The cylinder (the hole) has the same length as the diameter of the sphere.
So how do I use MeshLab to 'reduce' the ball by the contents of the cylinder and produce a new object?
MeshLab has some boolean operations under the "CSG Operation" filter; however, this resamples the meshes, which is probably not what you want. It is also prone to crashing.
Suggested alternatives are:
atomiccompiler.com : website that can do (among other things) boolean operations on uploaded STLs and provide a new STL for download. No need to install software. The downside is that it limits file sizes.
Blender : can handle complex boolean operations fairly reliably, and also handles colors correctly. Steep learning curve for new users.
OpenSCAD : nice programmatic CAD tool, but it sometimes crashes when given large STLs. (See the sketch after this list.)
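If you go the OpenSCAD route, the boolean itself is tiny. A minimal sketch (the radii are placeholders, and the cylinder is made slightly longer than the sphere's diameter so the subtraction cuts cleanly through both ends):

    // sphere with a cylindrical hole through it
    difference() {
        sphere(r = 10);
        cylinder(h = 21, r = 3, center = true);
    }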
I am mainly using EvaluateGlobalTransform to get animation from FBX files. This method works with the humanoid.fbx in the samples\ViewScene directory and with another ASCII-format FBX model that I made in Blender.
However, when I export the same Blender model in binary format and try to get the animation from it, the result is totally wrong. The matrices for every frame that I get by calling EvaluateGlobalTransform are mostly the same. Here are some snippets of the results (it is too much to print all of them, so I wrote them to a file).
The wrong one:
The right one:
I am sure that all the FBX files I use contain at least one animation stack and can be animated perfectly if you open them in FBX Review.
It is worth mentioning that the size (not storage size but spatial size) of the model I made in Blender is somehow larger in binary format than in ASCII format.
Please help me! Thanks!
It's me again. I think I have an answer to my own question. The reason the matrices are all the same in the binary FBX file but not in the ASCII FBX file is that the two animations containing those matrices are not the same one. In the binary file the default take is the Idle animation, whereas in the ASCII file the default take is the Walking animation. So when I extracted matrices from the FBX files, I was actually extracting the Idle animation from the binary file, in contrast to the Walking animation from the ASCII file.
Therefore, I only need to find a way to change the default take from which I extract the animation. I think I have solved my problem; I hope this can solve your problem too.
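For anyone hitting the same thing, here is a minimal sketch of how that selection might look with the FBX SDK in C++ (it assumes an already-loaded FbxScene* and a target FbxNode*; the function name and stack index are placeholders):

    #include <fbxsdk.h>

    // Select an animation stack ("take") before sampling transforms.
    void SampleStack(FbxScene* scene, FbxNode* node, int stackIndex)
    {
        if (stackIndex < 0 || stackIndex >= scene->GetSrcObjectCount<FbxAnimStack>())
            return;
        FbxAnimStack* stack = scene->GetSrcObject<FbxAnimStack>(stackIndex);
        scene->SetCurrentAnimationStack(stack);  // EvaluateGlobalTransform now samples this take

        FbxTime::EMode mode = scene->GetGlobalSettings().GetTimeMode();
        FbxTimeSpan span = stack->GetLocalTimeSpan();
        for (FbxLongLong f = span.GetStart().GetFrameCount(mode);
             f <= span.GetStop().GetFrameCount(mode); ++f)
        {
            FbxTime t;
            t.SetFrame(f, mode);
            FbxAMatrix m = node->EvaluateGlobalTransform(t);
            // ... write out m, one matrix per frame ...
        }
    }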
In my OpenGL app, I am drawing the same polygon approximately 50k times but at different points on the screen. In my current approach, I do the following:
Draw the polygon once into a display list
for each instance of the polygon, push the matrix, translate to that point, and scale and rotate appropriately (the scaling of each instance will be the same; the translation and rotation will not).
However, with 50k polygons, this is 50k pushes and pops, plus computation of the correct matrix translation to move each to the correct point.
A coworker of mine also suggested drawing the entire scene into a buffer and then just drawing the whole buffer with a single translation. The tradeoff here is that we need to keep all of the polygon vertices in memory rather than just the display list, but we wouldn't need to do a push/translate/scale/rotate/pop for each vertex.
The first approach is the one we currently have implemented, and I would prefer to see if we can improve that since it would require major changes to do it the second way (however, if the second way is much faster, we can always do the rewrite).
Are all of these push/pops necessary? Is there a faster way to do this? And should I be concerned that this many push/pops will degrade performance?
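For concreteness, this is roughly what the current approach looks like (a sketch; the Instance struct and the names instances, scale, and polyList are illustrative):

    #include <GL/gl.h>
    #include <vector>

    struct Instance { float x, y, angleDeg; };
    extern std::vector<Instance> instances;  // ~50k entries
    extern GLuint polyList;                  // display list holding the polygon
    extern float scale;                      // same for every instance

    void drawScene()
    {
        for (const Instance& inst : instances) {
            glPushMatrix();
            glTranslatef(inst.x, inst.y, 0.0f);
            glRotatef(inst.angleDeg, 0.0f, 0.0f, 1.0f);
            glScalef(scale, scale, 1.0f);
            glCallList(polyList);
            glPopMatrix();
        }
    }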
It depends on your ultimate goal. More recent OpenGL specs provide "geometry instancing": you can load all the matrices into a buffer and then draw all 50k with a single instanced draw call (OpenGL 3+). If you are looking for a temporary fix, at the very least load the polygon into a Vertex Buffer Object (VBO); display lists are very old and deprecated.
Are these 50k polygons going to move independently? If so, you'll have to put up with some form of "pushing/popping" (even though modern scene graphs do not necessarily use an explicit matrix stack). If the 50k polygons are static, you could pre-compile the entire scene into one VBO, which would make it render very fast. (A sketch of that option follows.)
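A sketch of that static option: apply each instance's scale/rotate/translate once on the CPU, pack the transformed vertices into a single VBO, and later draw the whole scene with one call (the Vec2/Instance types and all names are illustrative, and an extension loader such as GLEW is assumed for the buffer functions):

    #include <GL/glew.h>
    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };
    struct Instance { float x, y, angle; };

    GLuint bakeScene(const std::vector<Vec2>& polyVerts,
                     const std::vector<Instance>& instances, float scale)
    {
        std::vector<float> baked;
        baked.reserve(instances.size() * polyVerts.size() * 2);
        for (const Instance& inst : instances) {
            float c = std::cos(inst.angle), s = std::sin(inst.angle);
            for (const Vec2& v : polyVerts) {
                float x = v.x * scale, y = v.y * scale;   // scale
                baked.push_back(c * x - s * y + inst.x);  // rotate, then translate
                baked.push_back(s * x + c * y + inst.y);
            }
        }
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, baked.size() * sizeof(float),
                     baked.data(), GL_STATIC_DRAW);
        return vbo;  // later: set up attribute pointers, then one glDrawArrays call
    }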
If you can assume a recent version of OpenGL (>= 3.1, IIRC) you might want to look at glDrawArraysInstanced and/or glDrawElementsInstanced. For older versions, you can probably use glDrawArraysInstancedEXT/glDrawElementsInstancedEXT, but they're extensions, so you'll have to access them as such.
Either way, the general idea is fairly simple: you have one mesh, and multiple transforms specifying where to draw the mesh, then you step through and draw the mesh with the different transforms. Note, however, that this doesn't necessarily give a major improvement -- it depends on the implementation (even more than most things do).
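As a sketch of that instanced path (OpenGL 3.3 style, using glVertexAttribDivisor; attribute locations 1 through 4 and all variable names are assumptions that must match your vertex shader, and a loader such as GLEW is assumed):

    #include <GL/glew.h>

    // Upload one 4x4 matrix (16 floats) per instance and expose it as a
    // per-instance mat4 attribute; a mat4 spans four consecutive vec4 slots.
    void setupInstanceTransforms(GLuint instanceVBO, const float* matrices,
                                 int instanceCount)
    {
        glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
        glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 16 * instanceCount,
                     matrices, GL_STATIC_DRAW);
        for (int i = 0; i < 4; ++i) {
            glEnableVertexAttribArray(1 + i);
            glVertexAttribPointer(1 + i, 4, GL_FLOAT, GL_FALSE,
                                  sizeof(float) * 16,
                                  (const void*)(sizeof(float) * 4 * i));
            glVertexAttribDivisor(1 + i, 1);  // advance once per instance
        }
    }

    // One call draws every copy of the polygon.
    void drawAllInstances(GLuint polygonVAO, int indexCount, int instanceCount)
    {
        glBindVertexArray(polygonVAO);
        glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT,
                                nullptr, instanceCount);
    }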
Given a set of 2D images that cover all dimensions of an object (e.g. a car and its roof/sides/front/rear), how could I transform this into a 3D object?
Are there any libraries that could do this?
Thanks
These "2D images" are usually called "textures". You probably want a 3D library which allows you to specify a 3D model with bitmap textures. The library would depend on platform you are using, but start with looking at OpenGL!
OpenGL for PHP
OpenGL for Java
... etc.
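A minimal fixed-function sketch of the idea: upload one bitmap as a texture and map it onto a quad standing in for one face of the model (pixels, width, and height are placeholders for your decoded image data):

    #include <GL/gl.h>

    void drawTexturedFace(const unsigned char* pixels, int width, int height)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);

        glEnable(GL_TEXTURE_2D);
        glBegin(GL_QUADS);  // one face, with texture coordinates at the corners
        glTexCoord2f(0, 0); glVertex3f(-1, -1, 0);
        glTexCoord2f(1, 0); glVertex3f( 1, -1, 0);
        glTexCoord2f(1, 1); glVertex3f( 1,  1, 0);
        glTexCoord2f(0, 1); glVertex3f(-1,  1, 0);
        glEnd();
    }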
I've heard of the program "Poser" doing this using heuristics for human forms, but otherwise I don't believe this is actually theoretically possible. You are asking to construct volumetric data from flat data (inferring the third dimension).
I think you'd have to make a ton of assumptions about your geometry, and even then, you'd only really have a shell of the object. If you did this well, you'd have a contiguous surface representing the boundary of the object - not a volumetric object itself.
What you can do, as Tomas suggested, is slap these 2D images onto something. However, you still need to construct a triangle-mesh surface, and actually do all the modeling, for this to present a 3D surface.
I hope this helps.
What currently exists that can do anything close to what you are asking for automagically is extremely proprietary. There are no libraries, but there are some products.
The core issue is matching corresponding points in the images: being able to say that this spot in image A is this spot in image B, and that they both match this spot in image C, and so on.
There are three ways to go about this: manual matching (you have the photos and have to use your own brain to find the corresponding points), coded targets, and texture matching.
PhotoModeller, www.photomodeller.com, $1,145.00 US, supports manual matching and coded targets. You print out a bunch of target images, attach them to your object, shoot your photos, and the software finds the targets in each picture and creates a 3D object based on those points.
PhotoModeller Scanner, $2,595.00 US, adds texture matching: tiny bits of the images are compared to see if they represent the same source area.
Both PhotoModeller products depend on shooting the images with a calibrated camera, where you use a consistent focal length for every shot and you go through a calibration process to map the lens distortion of the camera.
If you can do manual matching, the Match Photo feature of Google SketchUp may do the job, and SketchUp is free. If you can shoot new photos, you can add your own targets like colored sticker dots to the object to help you generate contours.
If your images are drawings, such as profile and plan views, PhotoModeller will not help you, but SketchUp may be just the tool you need. You will have to build up each part manually, because you will have to supply the intelligence to recognize which lines and points correspond from drawing to drawing.
I hope this helps.
I am trying to find out if it's possible at all to write a command-line scene parser for 3ds Max 2010.
I want to gather some information from a Max scene without having to start up 3ds Max itself. I have been informed that it's not possible to access the Max API without starting the application.
Possible use of my program:
    C:\myparser.exe "myfile.max" > bonenames.txt
Any help/suggestions/hacks are greatly appreciated :)
Thanks
Most anything is possible with enough time, experience, and resources. But what you are suggesting is generally not feasible unless you:
Have full documentation on the binary file format of 3ds Max 2010, or
Need to extract an exceptionally small amount of information from the scene.
If you are only attempting to extract bone names from the file (and only for actual bone objects, rather than arbitrary geometry used as a bone), there is a chance (albeit very slim) that creating many files that differ in very minor ways might allow you to perform a binary diff and deduce some patterns from the contents.
For example, save an empty Max scene, then add one bone to it and save that, then rename the bone (using the same number of characters) and save that, then rename the bone to add one character and save that, then move the bone and save that, then add another bone and save that. Then try adding modifiers, or param blocks, or hiding the bone, or moving it to another layer, etc. etc. and see what you get. With luck there might be a sensible pattern among the layers of cruft that you can parse for yourself.
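As a starting point for that kind of experiment, a naive byte-level diff is enough to locate the regions that changed between two saves. A sketch in C++ (it knows nothing about the actual .max format; the program name and file names are placeholders):

    #include <algorithm>
    #include <cstdio>
    #include <fstream>
    #include <iterator>
    #include <vector>

    // Print the byte ranges where two files differ.
    int main(int argc, char** argv)
    {
        if (argc != 3) {
            std::fprintf(stderr, "usage: %s file1.max file2.max\n", argv[0]);
            return 1;
        }
        std::ifstream fa(argv[1], std::ios::binary), fb(argv[2], std::ios::binary);
        std::vector<char> a((std::istreambuf_iterator<char>(fa)),
                            std::istreambuf_iterator<char>());
        std::vector<char> b((std::istreambuf_iterator<char>(fb)),
                            std::istreambuf_iterator<char>());

        const std::size_t n = std::min(a.size(), b.size());
        bool inDiff = false;
        for (std::size_t i = 0; i < n; ++i) {
            if (a[i] != b[i] && !inDiff) {
                std::printf("differs from offset 0x%zx", i);
                inDiff = true;
            } else if (a[i] == b[i] && inDiff) {
                std::printf(" to 0x%zx\n", i);
                inDiff = false;
            }
        }
        if (inDiff) std::printf(" to the end of the common length\n");
        if (a.size() != b.size())
            std::printf("sizes differ: %zu vs %zu bytes\n", a.size(), b.size());
        return 0;
    }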