How can I create a controlled stand-alone 3D viewer? - animation

I'm trying to create a custom 3D viewer that can display a rigged model created with Blender. In particular, the model will be a mechanical one, so I need to move it with an IK solver: I'll move/rotate a single bone (which will be the motor axis) and all the other components must follow it.
I've already done the Blender part (see the link below), but now I'm stuck on how to make a stand-alone viewer (ideally a single executable file) that can communicate with another application/process (maybe over a TCP socket?) and move the 3D model based on the information sent by it. I'm working on Windows 10.
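For the communication part, something like this minimal Python sketch is what I have in mind (the one-float-per-message protocol and the viewer-side call are just assumptions for illustration):

import socket
import struct

# Assumed protocol: each message is one little-endian float, the motor-axis
# rotation in radians; the viewer would apply it to the motor bone and let
# the IK solver pose the rest of the rig.
HOST, PORT = '127.0.0.1', 5555

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
conn, addr = server.accept()
while True:
    data = conn.recv(4)  # a real viewer should buffer until all 4 bytes arrive
    if not data:
        break
    angle = struct.unpack('<f', data)[0]
    # viewer.set_motor_rotation(angle)  # hypothetical viewer-side call
    print('new motor angle:', angle)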
Does anyone have any ideas?
(Sorry for my bad English; I hope the explanation was thorough.)
https://youtu.be/5rR7BKrGzFg

Related

Motion capture with a Kinect v1 in Processing

Hello there, I was wondering if anyone could help me with something.
I have recently been given a task to do by teachers at college, and the way I hope to achieve it is through motion capture.
The other lecturers teach sound art and film art, so I plan to create a program that will track a participant's movements and display the movement on screen with either set or random colours.
I would also like to use the participant's movements for the sound part of this project, either by changing the pitch of a noise through movement or by changing the speed of the sound through movement.
I have managed to get an Xbox 360 Kinect (model 1414) to work in Processing and have played around with the motion tracking, but I can't seem to figure out how to attach an ellipse to the hands. I hope someone can help me, and that it doesn't seem too much of a hellish task.
if you can help here is my email address (alicebmcgettigan#gmail.com)
(If this is impossible I would understand, as I tend to make life difficult for myself, haha.)
You will need a middleware library that can provide skeleton tracking data from depth data.
One option on Windows is the Kinect for Windows Processing library which uses the Kinect SDK.
There is another library called SimpleOpenNI which works on multiple operating systems.
The official version is no longer updated for Processing 3 (it does work with Processing 2.2.1, though). Fortunately, you can find an updated fork of the SimpleOpenNI library on GitHub.
To manually install the library:
Select the version of the library for your version of Processing (e.g. for Processing 3.5.3 go to SimpleOpenni Processing_3.5.3). It should be one of 3.5.3, 3.5.2, 3.4, 3.3.7, 3.3.6, or 2.2.1 (otherwise you may need to install one of these Processing versions)
Click Clone or download > Download ZIP (on the top right side of the repo)
Unzip the contents, and within the extracted folder select the SimpleOpenNI folder that contains a folder named library
Move this nested SimpleOpenNI folder (containing the library folder) to Documents/Processing/libraries
Restart Processing (if it was already running)
Go to Processing > Examples > Contributed Libraries > SimpleOpenNI > OpenNI and start playing with the examples
Other notes:
To track a user, start with the User and User3d examples
Notice that context.getCoM() returns the centre of mass (a single point), while context.getJointPositionSkeleton() can get you the position of a hand in 3D
You can use context.convertRealWorldToProjective() to convert a 3D position to a projected 2D position on screen (see the sketch after these notes)
Once the skeleton tracking is locked onto a person you can get the joint position for each hand, but it's worth noting there's separate hand-tracker functionality: check out the Hands / Hands3d examples. Depending on how you want to track participants, what the environment is, and what the motions are, choose the option that works best
Speaking of the environment, bear in mind the Xbox 360 Kinect is susceptible to infrared light interference (for example bright incandescent lights, direct sunlight, etc.): this will deteriorate the depth map quality, which in turn affects skeleton tracking. You would want to have as much control over lighting as possible and have ideal lighting conditions.
Test! Test! Test! :) Think of the interaction and the environment (sketching on paper first can be useful), and for each assumption run a basic test to prove whether it works. Use iterations to learn how to change either the environment or the interaction to make it work.
Check out the RecorderPlay example: it records a .oni file which contains both RGB and depth data. This is super useful because it allows you to record on site in areas where you might have limited-time access, and it will save you time not having to go back and forth between your computer and in front of the Kinect. (Once you initialize SimpleOpenNI with the path to the .oni file (e.g. context = new SimpleOpenNI(this, recordPath);) you can run the skeleton tracking and everything else using the recording.)
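To tie these notes together, here is a minimal sketch of attaching an ellipse to a tracked hand. It follows the structure of the bundled User example; the choice of the left hand and the ellipse size are arbitrary:

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser();  // enable skeleton tracking
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);

  int[] users = context.getUsers();
  for (int i = 0; i < users.length; i++) {
    if (context.isTrackingSkeleton(users[i])) {
      PVector hand3D = new PVector();
      PVector hand2D = new PVector();
      // 3D position of the left hand joint
      context.getJointPositionSkeleton(users[i], SimpleOpenNI.SKEL_LEFT_HAND, hand3D);
      // project it to 2D screen coordinates
      context.convertRealWorldToProjective(hand3D, hand2D);
      fill(255, 0, 0);
      noStroke();
      ellipse(hand2D.x, hand2D.y, 20, 20);  // the ellipse "attached" to the hand
    }
  }
}

// start tracking the skeleton as soon as a user is detected
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}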
If you want to see more about Kinect and Processing, check out Daniel Shiffman's Getting Started with Kinect and Processing page.
Have fun!

3DS Max - How to display bones as floating objects instead of lines beneath a mesh?

So I downloaded a rigged model for 3ds Max and it had something I'd never seen before: bones shown as globes and rings floating outside the mesh, for easy access and convenience, instead of having to constantly access the object hierarchy or switch between layers to make sure everything is animating properly. How do I set up a model like this, or convert a model rigged with regular lines between points as bones into a model like this?
Rigged model with bone "helpers?"
I see you are new here, but this website is purely for programming-related questions. How to use any particular piece of software is not the focus of this website.
I would suggest you take 3ds Max questions to cgtalk.com, or perhaps to Autodesk's own support website. Hm... it looks like their old discussion forums website, www.the-area.com, doesn't work anymore. But I found this link: http://forums.autodesk.com/t5/3ds-max/ct-p/area-c1

How to use CRUD with an interactive 3D model? - Three.js

I am new to Three.js and I am not sure it exactly suits my needs!
I am creating a web application using PHP to keep car accident details.
I was wondering if it's possible to have an interactive 3D car object so that I can mark the damage on the car with a mouse click and post it to the server (Create, Read, Update, Delete).
So whenever I open the client details I can see the damage already marked on the car.
Could someone point me in the right direction?
How can I achieve this?
You can have a car model and write some JavaScript code to change it by mouse clicking. The problem is that you have to learn a lot about Three.js, 3D graphics, and JavaScript, so the right direction is to start learning those three things. Three.js is not a 3D modelling tool, so if you want to create car models in the browser you will have to write your own 3D modelling code. Once you have that, Read, Update, and Delete will just work; you will only have to add some code to write the changes made in the browser to your server. A minimal sketch of the mouse-picking part is below.
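This sketch raycasts a mouse click onto the car and records the hit point. It assumes you already have scene, camera, renderer, and a loaded carModel object; the /damage.php endpoint and the payload shape are placeholders, not a fixed API:

const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();

renderer.domElement.addEventListener('click', (event) => {
  // convert the click to normalized device coordinates (-1..1)
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(mouse, camera);
  const hits = raycaster.intersectObject(carModel, true);
  if (hits.length > 0) {
    const p = hits[0].point;  // 3D point on the car surface
    // Create: post the damage marker to the PHP backend
    fetch('/damage.php', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ x: p.x, y: p.y, z: p.z })
    });
  }
});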

What to use for creating custom 3D objects and animations?

I'm used to working with SolidWorks and CATIA for 3D objects and basic animations, but I have a personal project where I need to build a laser manufacturing machine and make it look like it does in real life: zoom inside the machine, see how the light gets polarized as it passes through a crystal, see how the laser beam hits the surface and particles fly off the hit surface, etc.
I thought about the Unity engine, but I don't know much about this area of 3D or what program to make the models in before importing them into Unity. Can you guys help me with better solutions?
Thanks,
Adrian
If you want to use Unity, you can import the following file formats: .FBX, .dae, .3DS, .dxf, .obj (http://docs.unity3d.com/Manual/HOWTO-importObject.html)
From SolidWorks you can export an STL, then import the STL into a 3D application (e.g. Blender), and from Blender you can export the model as an FBX, as sketched below.
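For example, this short Blender Python snippet covers the round trip (run it from Blender's scripting workspace; the file paths are placeholders and it assumes the bundled STL importer is enabled):

import bpy

# Import the STL exported from SolidWorks...
bpy.ops.import_mesh.stl(filepath='C:/models/machine.stl')
# ...then export an FBX that Unity can import directly.
bpy.ops.export_scene.fbx(filepath='C:/models/machine.fbx')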
I think you're able to achieve more in terms of animation with Unity. You can design your models in SolidWorks or CATIA and save them in IGES (a neutral format).

3D human hand design and control via Arduino

I want to design a 3D human hand and control it via a signal generated from my Arduino kit. I have designed a 3D hand in Blender, but how do I feed the signal generated by the Arduino into it to bring it to life? Which tool should I use?
For example, I have designed an arbitrary frequency generator, and I want the hand to mimic pinching, or making a fist, at a particular frequency. Which tool can I use to take the generated signal as input to a programming interface, with the program's output being an animated 3D hand?
Please help, guys......
Thanks in advance.
Python would seem the obvious solution to this, as it can interface directly with Blender.
I'm not sure how your controller works with frequencies, but chances are there's a library or a way to handle it in Python, for example by reading what the Arduino sends over USB serial, as in the sketch below.
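A minimal sketch with the pyserial package, assuming the Arduino prints one frequency value per line over USB serial (the COM port, baud rate, and frequency-to-pose mapping are placeholders):

import serial  # pip install pyserial

arduino = serial.Serial('COM3', 9600, timeout=1)
while True:
    line = arduino.readline().decode().strip()
    if not line:
        continue
    freq = float(line)
    # hypothetical mapping from frequency to hand pose
    pose = 'fist' if freq > 440.0 else 'pinch'
    print(freq, '->', pose)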
I would suggest looking at this forum post to learn how to set up the animation you want via a Python script.
Python can then be used to render a series of images (of the hand) like this:
import bpy

scene = bpy.data.scenes['Scene']
for i in range(scene.frame_end):
    scene.frame_set(i)  # jump to frame i
    scene.render.filepath = '/home/user/Pictures/frame%d.jpg' % i
    bpy.ops.render.render(write_still=True)  # render and save the frame
