I'm currently collecting data from a Kinect skeleton model, which gives a set of 23 3D points at a time, and I would like to convert that data into a .anim file so that I can load it into Unity and make a character move.
Is there any solution for this, i.e. to convert 3D data into a .anim file?
P.S. I already have the 3D data stored in a format like [time|x|y|z].
If you're on Windows you can use Brekel's tools (have a look at this workflow tutorial).
If you can write a bit of code, you can also try this approach.
Requirement:
I have to read DXF file entities, which can be 2D building dimensions, roads, etc. Then I have to place them over the map and return the coordinates as GeoJSON, just like labs.mapbox.com does when it exports coordinates, like the data below exported by labs.mapbox.com.
Approach: For now I'm using the Python ezdxf package to read the DXF file, which returns entity information (e.g. for a line it would be the start/end points). I was then thinking of drawing those entities on a canvas (not sure about this), placing that over Mapbox, and reading off the coordinates where the canvas is placed; exporting GeoJSON from that is the final goal.
Help required: Please suggest the right way to achieve this; I am open to choosing any framework/language.
Thanks for your time.
If you've got a DXF file and want to export it as a GeoJSON file, using the ezdxf Python package works, but you have to do a lot of processing of the DXF entities yourself, and it takes time to achieve what you want.
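For illustration, a minimal sketch of that manual route (assuming only LINE entities and, unrealistically, that the drawing's coordinates are already in lon/lat; the file names are placeholders) might look like this:

    import json
    import ezdxf

    # Read the DXF and collect the LINE entities from modelspace.
    doc = ezdxf.readfile("plan.dxf")
    msp = doc.modelspace()

    features = []
    for line in msp.query("LINE"):
        start, end = line.dxf.start, line.dxf.end
        features.append({
            "type": "Feature",
            "properties": {"layer": line.dxf.layer},
            "geometry": {
                "type": "LineString",
                # NOTE: assumes DXF coordinates are already lon/lat;
                # real drawings usually need a CRS transformation first.
                "coordinates": [[start.x, start.y], [end.x, end.y]],
            },
        })

    with open("plan.geojson", "w") as f:
        json.dump({"type": "FeatureCollection", "features": features}, f)

Every other entity type (polylines, arcs, inserts, ...) needs its own handling, which is where the time goes.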
I suggest using ogr2ogr instead, since it is a time-saving approach. It is part of GDAL/OGR, a library for working with geospatial data, and it can convert data between formats such as GeoJSON, shapefile, and others.
You can easily convert your DXF file to a GeoJSON file with:
ogr2ogr -f GeoJSON GEOJSON_FILE_NAME YOUR_DXF_FILE_NAME
I suggest reading the documentation first, to get to know the library and all the options it gives you for processing the data.
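If you'd rather drive the conversion from Python than from the shell, a small wrapper over the same command (a sketch; the file names are placeholders) could be:

    import subprocess

    def dxf_to_geojson(dxf_path, geojson_path):
        # Convert a DXF file to GeoJSON by shelling out to ogr2ogr.
        subprocess.run(
            ["ogr2ogr", "-f", "GeoJSON", geojson_path, dxf_path],
            check=True,  # raise if ogr2ogr reports an error
        )

    dxf_to_geojson("plan.dxf", "plan.geojson")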
I just started working with Processing, because I need to capture a sequence of images, color and depth. When I save those images while drawing, saving each frame as it arrives, I only get around 2 fps. Is there a way to improve this?
My thought was to store the images in a list instead. Since there is a setup() function, I assumed there would also be a shutdown() function or something similar that gets called when I hit the Esc key or close the window, like a destructor, where I could loop through that list and save the images. But I can't find such a function.
I am working on a MacBook Air (2013).
If you use OpenNI/SimpleOpenNI I recommend a nicer option: use the .oni format (which stores both the depth and RGB streams). All you have to do is:
Record to an .oni file (fast/realtime)
Read the depth/color streams from the recorded .oni file when you need to.
To record to an .oni file you've got two options:
Use the Examples > Contributed Libraries > SimpleOpenNI > OpenNI > RecorderPlay sketch to record (some explanations at the bottom of this answer)
Use OpenNI SDK's NiViewer utility which can also save/load .oni files. (You can easily install this using homebrew: brew install homebrew/science/openni2. The path in this case will be something like /usr/local/Cellar/openni2/2.2.0.33/share/openni2/tools/NiViewer)
Once you have your .oni file, you can easily read it back at a different rate and access the depth/RGB streams to save them to disk.
Regarding your existing program
The frame rate drops because the sketch encodes and writes two images to disk per frame, all on the same thread. You can improve this by:
saving to an uncompressed format (like TIFF)
threading the image save operation (see the bottom of this answer for some ideas, and the sketch below)
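Your sketch itself would be in Processing, but the threading pattern is language-agnostic; here it is in Python as an illustration (grab_frame is a hypothetical stand-in for the actual capture, and frames are written as raw bytes rather than encoded images):

    import queue
    import threading

    # Frames are queued by the capture loop and written by a worker thread,
    # so encoding/disk time no longer stalls the capture frame rate.
    frames = queue.Queue()

    def save_worker():
        while True:
            item = frames.get()
            if item is None:  # sentinel: capture loop is done
                break
            path, data = item
            with open(path, "wb") as f:  # stand-in for the real image save
                f.write(data)
            frames.task_done()

    worker = threading.Thread(target=save_worker, daemon=True)
    worker.start()

    # Capture loop (sketch): hand each grabbed frame to the queue and move on.
    for i in range(100):
        data = grab_frame()  # hypothetical capture call
        frames.put(("frame_%04d.raw" % i, data))

    frames.put(None)  # tell the worker to finish
    worker.join()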
I use WebGL.
Is there a way to modify a .obj file (or another 3D file), for example in Photoshop? When I make changes to such a file, I would like it to keep those changes when I load it on a website.
I know this is not possible with .obj, so is there another format that allows it?
I think you can load some 3D files into Photoshop these days, but I'm pretty sure you will not be able to modify them.
To modify .obj files you can use 3D software such as 3ds Max or Maya, or you could use an online tool such as the three.js editor or clara.io.
There are probably various ways to achieve what you want.
If I understood you correctly, you actually have multiple questions, here are some answers:
Which 3D editing software could I use?
If you want to perform some modification with a powerful 3D modeling tool, I would recommend downloading and trying Blender. It is completely free, yet a very advanced 3D modeling package.
In case you just want to smooth your mesh, simplify it, or apply some other generic operation like that, then MeshLab might be sufficient (also available for free).
Which 3D model format should I use for delivering my 3D asset on the Web?
If you use X3DOM for displaying your 3D file, you can use the standardized X3D format (like OBJ, X3D content can be imported/exported by both Blender and MeshLab). This has the advantage that you can use X3DOM's inline tag and directly import an X3D file, which means you can edit the 3D content without needing to re-convert your data for the Web.
However, using OBJ, X3D, or any other text-based delivery format might not be the wisest choice if your 3D asset is large, since it will introduce long download times. Therefore, if you have complex assets/scenes, you should also consider converting your 3D assets to a compact, optimized delivery format for the Web, such as glTF or SRC.
How do I compare 3D image files in TestComplete? My application processes some 3D images, and I want them to be compared with a reference. The image file types are .spt, .vtk, .mdb, and .dcm.
Can someone help me?
You can probably use checkpoints for this purpose. For example:
To verify an image displayed on screen, use a region checkpoint.
To verify the actual file that holds the image data, use a file checkpoint.
Well, for DICOM images you could think about converting them into bitmaps and having TestComplete compare the bitmaps. Admittedly, there is one additional step that you have to take care of: the choice of a (command-line) tool that does the conversion for you. I think IrfanView does the job. Give it a try and post your results.
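If you'd rather script that conversion than use IrfanView, one alternative (a sketch using the pydicom and Pillow Python libraries, which is my own suggestion rather than something the tools above require) is:

    import numpy as np
    import pydicom
    from PIL import Image

    def dicom_to_png(dcm_path, png_path):
        # Read the DICOM file and pull out its pixel data.
        ds = pydicom.dcmread(dcm_path)
        arr = ds.pixel_array.astype(np.float64)
        # Normalize to 0..255 so the result is an ordinary 8-bit bitmap
        # that TestComplete can compare.
        arr -= arr.min()
        if arr.max() > 0:
            arr *= 255.0 / arr.max()
        Image.fromarray(arr.astype(np.uint8)).save(png_path)

    dicom_to_png("actual.dcm", "actual.png")
    dicom_to_png("reference.dcm", "reference.png")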
I am trying to do animations on the iPhone using OpenGL ES. I am able to create the animation in the Blender 3D software, and I can export a .obj file from Blender that works on the iPhone in OpenGL.
But I am not able to export my animation work from Blender to OpenGL. Can anyone please help me solve this?
If you have a look at this article by Jeff LaMarche, you'll find a Blender script that will output a 3D model to a C header file. There's also a follow-up article that improves upon the aforementioned script.
After you've run the script, it's as simple as including the header in your source, and passing the array of vertices through your drawing function. Ideally you'd want a method of loading arbitrary model files at runtime, but for prototyping this method is the simplest to implement.
Seeing as you already have a method of importing models (obj) then the above may not apply. However, the advantage of using a blender script is that you can then modify the script to suit your own needs, perhaps also exporting bone information or model keyframes.
Well, first off, I wouldn't recommend .obj for this purpose, since the OBJ file format doesn't support animation, only static 3D models. So you'll need to export the animation data as a separate file that you load at the same time as the .obj.
Which file format I would recommend depends on what exactly your animations are. I don't remember off the top of my head what file formats Blender supports, but as I recall it does not export Collada files with animation, which would be the most general recommendation. Other options would be md2 for character animations, or 3ds for simple "rigid objects moving around" animations. I think Blender's FBX exporter will work, although that file format may be too complicated for your needs.
That said, and assuming you only need simple rigid-object movements, you could use .obj for the 3D model shapes and then write a simple Python script to export a file from Blender that lists the keyframes, with the frame number, position, and rotation of each one. Then load that data in your code and play those keyframes back on the 3D model.
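Such an export script could look roughly like this (a sketch against Blender's Python API; it assumes the object to export is the active object, samples every frame instead of reading the actual keyframes, and the output path is a placeholder):

    import bpy

    scene = bpy.context.scene
    obj = bpy.context.object  # the active object to export

    with open("/tmp/anim.txt", "w") as f:
        f.write("frame x y z rx ry rz\n")
        for frame in range(scene.frame_start, scene.frame_end + 1):
            scene.frame_set(frame)  # advance the scene to this frame
            loc = obj.matrix_world.to_translation()
            rot = obj.matrix_world.to_euler()
            f.write("%d %f %f %f %f %f %f\n"
                    % (frame, loc.x, loc.y, loc.z, rot.x, rot.y, rot.z))

Loading that text file on the iPhone side and interpolating between rows is then straightforward.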
This is an old question, and since then some new iOS frameworks have been released, such as GLKit. I recommend relying on them as much as possible, since they take care of many inherent conversions like this, though I haven't researched the specifics. Also, while not on iOS, the new scene-graph technology for OS X (which will likely arrive on iOS in the future) takes all this quite a bit further, and a crafty individual could do some conversions with that tool and then take the output to iOS.
Also have a look at SIO2.
I haven't used recent versions of Blender, but my understanding is that it supports exporting a mesh animation as a sequence of .obj files. If you can already display a single .obj in your app, then displaying several of them one after another will achieve what you want.
Note that this is not the most efficient way to export this type of animation, since each .obj file will duplicate a lot of information. If your mesh stays fixed over time (i.e. only the vertex positions change, while the polygon structure, UV coords, etc. remain fixed), then you can import the entire first .obj and read just the vertex array from the rest.
If you wanted to optimize this even more, you could compress the vertex arrays so that you only store the differences from the previous frame of the animation, as in the sketch below.
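A sketch of both ideas in Python (reading only the vertex lines from each .obj in the sequence, then storing per-frame deltas; the file names are placeholders, and the vertex count and order are assumed identical in every file):

    def read_vertices(obj_path):
        # Return only the vertex positions from a .obj file.
        verts = []
        with open(obj_path) as f:
            for line in f:
                if line.startswith("v "):
                    _, x, y, z = line.split()[:4]
                    verts.append((float(x), float(y), float(z)))
        return verts

    frames = [read_vertices("anim_%04d.obj" % i) for i in range(1, 11)]

    # Store frame 0 absolutely, then only per-vertex differences after that.
    deltas = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        deltas.append([(cx - px, cy - py, cz - pz)
                       for (px, py, pz), (cx, cy, cz) in zip(prev, cur)])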
Edit: I see that Blender 2.59 can export to COLLADA. According to the Blender manual, you can export object transformations, and you can also export baked animation for rigged objects. The benefit of supporting the COLLADA format in your iPhone app is that you are free to switch between animation tools, since most of them can export this format.