Writing a command-line scene parser for 3ds Max 2010

I am trying to find out if it's possible at all to write a command-line scene parser for 3ds Max 2010.
I want to gather some information from the Max scene without having to start up 3ds Max itself. I have been informed that it's not possible to access the Max API without starting 3ds Max.
Possible usage of my program:
C:\myparser.exe "myfile.max" > bonenames.txt
Any help/suggestions/hacks are greatly appreciated :)
Thanks

Almost anything is possible with enough time, experience, and resources. But what you are suggesting is generally not feasible unless you:
Have full documentation on the binary file format of 3ds Max 2010, or
Need to extract an exceptionally small amount of information from the scene.
If you are only attempting to extract bone names from the file (and only for actual bone objects rather than arbitrary geometry used as a bone), there is a slim chance that creating many files that differ in very minor ways would allow you to perform a binary diff and deduce some patterns from the contents.
For example, save an empty Max scene, then add one bone to it and save that, then rename the bone (using the same number of characters) and save that, then rename the bone to add one character and save that, then move the bone and save that, then add another bone and save that. Then try adding modifiers, or param blocks, or hiding the bone, or moving it to another layer, etc. etc. and see what you get. With luck there might be a sensible pattern among the layers of cruft that you can parse for yourself.
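If you go down that road, the diffing itself is trivial. A minimal Python sketch, with placeholder file names standing in for the incrementally changed scenes described above:

# Report the byte offsets at which two .max saves differ. Note that a change
# in length shifts everything after it, so compare same-length variants first.
def diff_files(path_a, path_b):
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    return [i for i in range(min(len(a), len(b))) if a[i] != b[i]]

offsets = diff_files("empty.max", "one_bone.max")  # placeholder file names
print(len(offsets), "differing bytes, first few at:", offsets[:10])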

Related

Make Paraview animation by changing the data on a fixed geometry

I wish to construct an animation in Paraview starting from some files obtained in an optimization process. I have a mesh made of tetrahedrons and at each iteration I have a scalar field on this mesh.
I could create a VTK file for each iteration, but such a file is larger than 100 MB, and it would take more than 15 GB to store all the VTK files. Moreover, the geometry part in each VTK file is the same, so I guess there is a more efficient solution. Hence my question:
Is it possible to make animations in Paraview by changing a scalar field on a fixed geometry?
(if this is not the right forum to ask this, please let me know where it could be more appropriate)

Grouping rectangles in iTextSharp

I have multiple rectangles and they all share the same spot color. Is there a way to merge/group them into one vector object so that the generated PDF is smaller?
If you are creating the document from scratch, then the answer is trivial: yes!
It's sufficient to draw all the paths of the rectangles that share the same spot color and then use the operator that fills, strokes, or fills & strokes the paths.
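To make that concrete, here is a rough sketch of what such a content stream looks like at the operator level, built with plain Python for illustration (iTextSharp would normally emit these operators for you; /CS0 is a hypothetical spot color space name defined in the page resources):

# Set the spot color once, append every rectangle to one path with 're',
# then paint them all with a single 'f' (fill) operator.
rects = [(10, 10, 50, 20), (80, 10, 50, 20), (150, 10, 50, 20)]
ops = ["/CS0 cs 1 scn"]  # select the color space and set the tint once
ops += ["%d %d %d %d re" % r for r in rects]  # accumulate sub-paths
ops.append("f")  # one fill operator paints all the rectangles
print("\n".join(ops))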
If you are talking about optimizing an existing PDF document, you're in for some heavy programming. You would need to parse every content stream looking for rectangle operators (assuming that the rectangles aren't drawn using move-to and line-to operators), check where these shapes are filled and/or stroked, and then rearrange all these operators. This would require a lot of thought. I would know where to begin, but I can't predict where it would end. Maybe it would turn out that it makes more sense to define a single rectangle as a Form XObject and reuse that single external object, maybe not. It's hard to predict.
Moreover: you are talking about operators in a stream. These streams are compressed anyway, so you may be doing a lot of work to gain only a very small decrease in size.
I would say: what you are asking for may be possible, but it is unclear why you would do this, because it would result in only a limited decrease in file size.
If size is an issue, there may be other places where you are "wasting bytes" that could yield a more noticeable saving. I am very curious to hear why you think the rectangles using spot colors are the culprit. You are reusing the spot color instance, aren't you? If you are creating a new spot color instance for every rectangle you draw, you have found the real culprit, and you can avoid having to group the rectangles.

Three.js How to increase canvas-text texture quality

What parameters, modes, tricks, etc. can be applied to get sharp text?
I'm going to draw a lot of it, so I can't use 3D text.
I'm using a canvas to write the text and some symbols. I'm creating something like information labels.
Thanks
This is no simple matter, since you'll run into memory issues with 100k "font textures". With 100k text elements you'll have several difficulties to manage. I had a similar problem once and tossed together a few techniques to make it work. Simply put, you need some sort of LOD ("Level of Detail") scheme. The setup might look like the following:
A THREE.ParticleSystem built up with BufferGeometry where every position is one text-position
One "highres" TextureAtlas with 256 images on it which you allocate dynamically with those images that are around you (4096px x 4096px with 256x256px images)
At least one "lowres" TextureAtlas with 16×16 px images, which you prepare beforehand. Same size as the previous one, but holding preview images of all your texts, each 16×16 px in size.
A kd-tree data structure for a nearest-neighbour search, to figure out which positions are near the camera (like http://threejs.org/examples/#webgl_nearestneighbour; sketched in Python after this list)
A sub-imaging mechanism for continually replacing highres textures directly on the GPU: https://github.com/mrdoob/three.js/pull/4661
An index for every position to tell it which position on the TextureAtlas it should use for display
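The kd-tree step in particular is just bookkeeping. Here is a minimal Python sketch using SciPy; the array contents and the budget of 256 highres slots are assumptions matching the atlas above:

from scipy.spatial import cKDTree
import numpy as np

# positions: one (x, y, z) entry per text label; stand-in data for the sketch.
positions = np.random.rand(100_000, 3) * 1000.0
camera_pos = np.array([500.0, 500.0, 500.0])

tree = cKDTree(positions)
# The 256 labels nearest the camera get highres atlas slots;
# everything else keeps its 16x16 preview image.
dist, idx = tree.query(camera_pos, k=256)
highres_indices = set(idx.tolist())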
You see where I'm going. Here are some notes on my experiences:
The Stackoverflow post: Display many thousand images in three.js
The blog where I began to explain what I was doing: http://blogs.fhnw.ch/threejs/
Even this way, it will take quite some time until you have satisfying results. The only way to make this simpler is to get rid of the 16×16 px preview images, but I wouldn't recommend that... Or perhaps, depending on your setup, there's a shortcut. Maybe you have levels? Towns? Or any other structure where it would make sense to only display a portion of these texts? That might be worth a thought before tackling the big thing.
If you plan to really work on this and make this happen the way I described I can help you with some already existing code and further explanations. Just tell me where you're heading :)

Camera calibration patterns

I would like to know if there is a process to generate camera calibration patterns.
We can use Paint or any other graphics tool and set the precise measurements, but then we need to hard-code the point positions or create a txt/xml file.
Is there software that exports the data to a file that we can load into our own software?
What about 3D targets like boxes and/or cubes? Is there a method to generate the correct data points?
Cheers.
For 2D targets such as checkerboards, I used to do it like user469049 describes, which was quite time-consuming. In the end I gave up and created a web tool that does all of the legwork:
https://calib.io/pages/camera-calibration-pattern-generator
I'm using Inkscape:
http://dominoc925.blogspot.co.uk/2012/06/create-camera-calibration-chess-board.html
I usually create a PDF file for printing and save the files as LaTeX with PSTricks extensions.
The tex file has paths, so for a square it has a \moveto command to set the starting point and a \lineto command to set the next points.
In the dominoc925 example they define black and white squares, but I just define the black squares to avoid repeated points.
I have a simple file loader in my code to get the points: just search for the \moveto and \lineto commands and work out the points from there.
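Such a loader can be very small. A hedged Python sketch (the regex assumes coordinates appear as \moveto(x,y) and \lineto(x,y) in the exported tex file, and the file name is a placeholder):

import re

# Collect coordinates from \moveto(x,y) and \lineto(x,y) commands in a
# PSTricks export. Adjust the pattern if your exporter adds spaces or units.
pattern = re.compile(r"\\(?:moveto|lineto)\(([-\d.]+),([-\d.]+)\)")

def load_points(path):
    with open(path) as f:
        return [(float(x), float(y)) for x, y in pattern.findall(f.read())]

points = load_points("calibration_pattern.tex")  # placeholder file name
print(len(points), "corner points loaded")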
For the 3D targets I treat each pattern as one view, because I don't have the tools to build a precise 3D target.
So instead of having different views of one pattern like in the Matlab toolbox, I treat each detected pattern as a view.
In other words, if you have a 3D object, then the target on each face is treated as an independent view.
There is probably a more professional way to do the job but this is my process :)
I hope this helps.

Object detection + segmentation

I'm trying to find an efficient way, of acceptable complexity, to
detect an object in an image so I can isolate it from its surroundings
segment that object into its sub-parts and label them so I can then fetch them at will
It's been 3 weeks since I entered the image processing world, and I've read about so many algorithms (SIFT, snakes, more snakes, Fourier-related, etc.) and heuristics that I don't know where to start or which one is "best" for what I'm trying to achieve. Bearing in mind that the image dataset of interest is a pretty large one, I don't even know if I should use an algorithm implemented in OpenCV or implement one of my own.
To summarize:
Which methodology should I focus on? Why?
Should I use OpenCV for that kind of stuff or is there some other 'better' alternative?
Thank you in advance.
EDIT -- More info regarding the datasets
Each dataset consists of 80K images of products sharing the same:
concept, e.g. t-shirts, watches, shoes
size
orientation (90% of them)
background (95% of them)
All pictures in each dataset look almost identical apart from the product itself, obviously. To make things a little clearer, let's consider only the 'watch dataset':
All the pictures in the set look almost exactly like this:
(again, apart from the watch itself). I want to extract the strap and the dial. The thing is that there are lots of different watch styles, and therefore shapes. From what I've read so far, I think I need a template algorithm that allows bending and stretching, so as to be able to match straps and dials of different styles.
Instead of creating three distinct templates (upper part of the strap, lower part of the strap, dial), it would be reasonable to create only one and segment it into 3 parts. That way, I could be confident that each part was detected in the right position relative to the others, e.g. the dial would not be detected below the lower part of the strap.
Of all the algorithms/methodologies I've encountered, active shape/appearance models seem to be the most promising ones. Unfortunately, I haven't managed to find a decent implementation, and I'm not confident enough that they are the best approach to go ahead and write one myself.
If anyone could point out what I should be really looking for (algorithm/heuristic/library/etc.), I would be more than grateful. If again you think my description was a bit vague, feel free to ask for a more detailed one.
From what you've said, here are a few things that pop up at first glance:
The simplest thing to do is binarize the image and run Connected Components using OpenCV or the cvBlob library. For simple images with a non-complex background this usually yields the objects.
However, looking at your sample image, texture-based segmentation techniques may work better: the watch dial, the straps, and the background vary widely in texture/roughness, and this could be an ideal way to separate them.
The roughness of a region can easily be found by the Eigen transform (explained a bit on SO; check the link to the research paper provided there), and the Mean Shift filter can then be applied to the output of the Eigen transform. This will give regions clearly separated according to texture. Both the pyramidal Mean Shift and finding eigenvalues by SVD are implemented in OpenCV, so unless you can optimize your own code it's better (and easier) to use the built-in functions (where present) as far as speed and efficiency are concerned.
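A hedged OpenCV (Python) sketch of the simple route described above; the file name and Mean Shift parameters are placeholders, cv2.connectedComponents needs OpenCV 3 or later, and for brevity the Mean Shift is run on the color image rather than on the Eigen-transform output suggested above:

import cv2

gray = cv2.imread("watch.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Binarize: Otsu picks the threshold automatically; inverted because the
# object is assumed darker than the background.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Connected components label each blob; for simple backgrounds the largest
# non-background label is a reasonable first guess at "the object".
num_labels, labels = cv2.connectedComponents(binary)

# Texture route: pyramidal Mean Shift merges regions of similar appearance.
color = cv2.imread("watch.jpg")
segmented = cv2.pyrMeanShiftFiltering(color, sp=21, sr=51)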
I think I would turn the problem around. Instead of hunting for the dial, I would use a set of robust features from the watch to 'stitch' the target image onto a template. The first watch has a set of white squares in the dial; the second watch has a number of white circles. Per type of watch, I would (a rough sketch follows the list):
Segment out the squares or circles in the dial. Segmentation steps can be tricky as they are usually both scale and light dependent
Estimate the centers or corners of the above found feature areas. These are the new feature points.
Use the Hungarian algorithm to match features between the template watch and the target watch. Alternatively, one can take the surroundings of each feature point in the original image and match these using cross correlation
Use matching features between the template and the target to estimate scaling, rotation and translation
Stitch the image
As the image is now in a known form, one can extract the regions simply via preset coordinates
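Steps 3 to 5, sketched in Python. This is a rough illustration, not a full pipeline: template_pts and target_pts stand in for the feature centers found in step 2, the pairwise-distance cost is a crude stand-in for the patch cross-correlation mentioned in step 3, and cv2.estimateAffinePartial2D requires a reasonably recent OpenCV:

import numpy as np
import cv2
from scipy.optimize import linear_sum_assignment  # Hungarian algorithm
from scipy.spatial.distance import cdist

# Stand-in feature centers from step 2 (template watch vs. target watch).
template_pts = np.array([[100, 120], [180, 120], [140, 200]], dtype=np.float32)
target_pts = np.array([[210, 130], [140, 210], [110, 125]], dtype=np.float32)

# Step 3: Hungarian matching on a pairwise-distance cost matrix.
cost = cdist(template_pts, target_pts)
rows, cols = linear_sum_assignment(cost)
matched_target = target_pts[cols]  # target points reordered to match the template

# Step 4: estimate scale, rotation and translation (a partial affine transform).
M, inliers = cv2.estimateAffinePartial2D(matched_target, template_pts)

# Step 5: stitch, once the target image is loaded:
# stitched = cv2.warpAffine(target_img, M, (width, height))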
