Making a follow camera in MotionBuilder - animation

I am using VICON c3d files of someone walking, and I want to change the point of view to be first person. I started by setting the original camera to eye level, but when the character starts walking, the camera doesn't follow the eyes, so it turns into a view from behind.

Your question isn't programming-related on any level whatsoever. But if you want an answer, it's that you need to constrain the camera to the head/eyes so that it follows them in the first-person manner you want.
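In MotionBuilder itself the usual route is a Parent/Child constraint (head joint as parent, camera as child) so the camera inherits the head's motion. Tool aside, the logic is just copying the head joint's transform onto the camera every frame, plus an eye-level offset. A minimal, engine-agnostic JavaScript sketch of that idea (all names here are hypothetical):

// A "follow constraint" boils down to copying the head joint's
// transform onto the camera once per frame, after the mocap data
// has been evaluated. eyeOffset nudges the camera from the head
// joint to eye level.
function updateFollowCamera(camera, headJoint, eyeOffset) {
  camera.position.x = headJoint.position.x + eyeOffset.x;
  camera.position.y = headJoint.position.y + eyeOffset.y;
  camera.position.z = headJoint.position.z + eyeOffset.z;
  camera.rotation = { ...headJoint.rotation }; // look where the head looks
}

// Per frame, e.g. with a small up/forward offset to reach the eyes:
// updateFollowCamera(camera, headJoint, { x: 0, y: 0.07, z: 0.09 });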

Joint for physical gun slider?

I want to make it so that when I fire my gun, the slider shoots back and then comes forward like on a real gun. However, I don't know how to start, because I can't find any relevant information on Google and I don't know what to search for. All I need is the logic behind how to do it; I can code it myself.
I would agree with zambari: just create an animation and play it when your gun gets fired.
Edit:
Since you are talking about 3D, you could either:
- move the pivot of the object to the point where the trigger would be attached, so that all you need to animate is the object's rotation, or
- use joints.
This is the best tool for VR guns with interactable parts; I highly recommend it to anyone making a VR game:
https://github.com/Oyshoboy/weaponReloadVR
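Whatever engine you end up in, the logic the asker wants is small: on fire, snap the slider back along its slide axis, then ease it forward to rest over a fixed time. A minimal, engine-agnostic JavaScript sketch of that logic (all names here are hypothetical):

// Recoil logic: fire() kicks the slider back instantly; update(dt)
// eases it forward again at constant speed until it rests.
class GunSlider {
  constructor(travel, returnTime) {
    this.travel = travel;         // how far back the slider kicks
    this.returnTime = returnTime; // seconds to return to rest
    this.offset = 0;              // current backward displacement
  }
  fire() {
    this.offset = this.travel;    // instant kick on firing
  }
  update(dt) {
    const speed = this.travel / this.returnTime;
    this.offset = Math.max(0, this.offset - speed * dt);
    // In your engine, set the slider mesh's local position along its
    // slide axis to -this.offset here.
  }
}

// Usage: call fire() when the gun fires, update(dt) every frame.
const slider = new GunSlider(0.05, 0.1);
slider.fire();
slider.update(1 / 60);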

Can't update old code to Three.js 88

I've been stuck for the last two weeks on updating the threejs_mousepick.html example from an old THREE.js release to the current one. Oh, yeah, I am a newbie to programming.
I've created a Fiddle, hoping someone would spend some time helping me. CANNON.js is a great API, and it is sad to see that the examples are so old/unusable with today's THREE.js. I understand it is a lot of work, and I am willing to help, but I need some help first. So, if you read this, schteppe, get in touch: I am willing to spend some time working on this.
The answer is as broad as the question.
Using THREE.Raycaster() and THREE.Plane() simplifies things a lot. It lets you get rid of functions such as projectOntoPlane, findNearestIntersectingObject, and getRayCasterFromScreenCoord, and it shortens the setScreenPerpCenter function (its name is ridiculous, but I left it as it was) to just one line.
jsfiddle example r87
gplane is a THREE.Plane():
var gplane = new THREE.Plane(), gplaneNormal = new THREE.Vector3();
As written in the descriptive comment, we create a virtual plane on which we move our joint point.
function setScreenPerpCenter(point) {
gplane.setFromNormalAndCoplanarPoint(gplaneNormal.subVectors(camera.position, point).normalize(), point);
}
Here, we set our plane from a normal and a coplanar point: the normal is the normalized vector from the click point on the cube to the camera (camera position minus click point), and the coplanar point is the click point itself. Read about that method here.
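To round out the picture, here is a hedged sketch of how that plane is typically used while dragging: on each mousemove, cast a ray from the camera through the cursor and intersect it with gplane to get the new joint target (everything except gplane is an assumed name here):

// Drag sketch: intersect the mouse ray with the virtual plane and
// feed the hit point to the physics joint. `camera`, `mouse`, and
// `jointTarget` are assumed to be set up elsewhere.
var raycaster = new THREE.Raycaster();
var intersection = new THREE.Vector3();

function onMouseMove(event) {
  // Cursor position in normalized device coordinates (-1..+1).
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(mouse, camera);
  // Where does the mouse ray pierce the drag plane?
  if (raycaster.ray.intersectPlane(gplane, intersection) !== null) {
    jointTarget.copy(intersection); // new target for the constraint
  }
}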

Detecting the release of a ball in real time

I'm working on a project where I'm capturing people making free throw shots via a video camera. I need a way to detect, as fast as possible, the instant the ball is released from a player's hand. I tried researching a lot of detection/tracking algorithms, but everything I've found seemed more suited to tracking the ball itself. While I may eventually want to do that, right now all I need to know is the release timing.
I'm also open to other solutions that don't use the camera (I have a decent budget), but of course I'd like to use the camera if it's possible/fast enough. I'm also able to adjust the camera positioning/setup, and even what I want in the FOV.
Does anyone have any ideas? I'm pretty stuck right now, and haven't been able to find anything online that can help.
One solution is to use visual markers (motion trackers) on the throwing hand and on the ball. The precision depends on the camera's FPS.
This assumes you know the ball's dimensions and the hand's grip on the ball, which may vary. Using visual markers/trackers, you can track the position of the ball relative to the hand. When the distance between the hand and its initial grip point on the ball becomes greater than the distance between the ball's center and its surface (i.e., the radius), you have your release. Schema of the method
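In code, that marker test is just a per-frame distance comparison. A minimal sketch, assuming your tracking system already gives you 3D positions for the hand marker and the ball's center (all names here are hypothetical):

// Release test: the ball counts as released once the hand marker is
// farther from the ball's center than the ball's radius, plus a small
// tolerance for marker jitter. Positions are {x, y, z} in meters.
function distance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

function isReleased(handMarker, ballCenter, ballRadius, tolerance) {
  return distance(handMarker, ballCenter) > ballRadius + tolerance;
}

// Per frame, with the latest tracker data (basketball radius ~0.12 m):
// if (isReleased(hand, ball, 0.12, 0.01)) { recordReleaseTime(); }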
A better solution is to use a graded meter bar (alternating black and white bars, like the ones used on the MythBusters show to track the speed of objects). The moment there is a color gap between the hand and the ball, you have your release. The downside of this approach is that you have to capture the image from a side or top-down angle and use panels to hold the grading.
Your problem is similar to billiard-ball collision detection. I hope you find this paper helpful.
Edit:
There is a powerful, not-that-expensive motion-capture tool called the Microsoft Kinect. The downsides of this tool are that its camera works at 30 fps and that it cannot be used accurately in a very sunny scene. However, I have found a scientific paper about using the Kinect to record athletes, including free throws in basketball. Paper here
It's my first answer on SO. Any feedback on how to improve my future answers is appreciated.

How to use texture masks in GameMaker?

First off, I'm not totally sure if "texture masks" is the correct term to use here, so if someone knows what it is, please let me know.
So, the real question: I want to have an object in GameMaker: Studio whose texture changes as it moves around, depending on its position, by pulling from a larger static image behind it. I've made a quick GIF of what it might look like.
It can be found here
Another image that might help explain this is the "source-in" section of this image.
This is a reply to the same question posted on the steam GML forum by MrDave:
The feature you are looking for is draw_set_blend_mode(bm_subtract)
Basically, you will have to draw everything onto a surface, and then, using the code above, switch the draw mode to bm_subtract. What this does is, rather than draw images to the screen, remove them. So you now draw blocks over the background, and this removes that area. Then you draw everything you just put on the surface onto the screen.
(Remember to reset the draw mode and the surface target afterwards.)
It's hard to get your head around the first time, but it actually isn't all that complex once you get used to it.
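If it helps to see the flow outside GameMaker, here is the same surface-plus-subtract idea sketched with the HTML5 canvas API (illustrative only, not GameMaker code): an offscreen canvas plays the role of the surface, and 'destination-out' compositing plays the role of bm_subtract, so drawing erases instead of painting.

// Offscreen buffer standing in for the GML surface.
const buffer = document.createElement('canvas');
buffer.width = 640;
buffer.height = 480;
const buf = buffer.getContext('2d');

function drawFrame(screen, background, obj) {
  // 1. Draw the cover layer onto the buffer.
  buf.globalCompositeOperation = 'source-over';
  buf.fillStyle = '#444';
  buf.fillRect(0, 0, buffer.width, buffer.height);
  // 2. "Subtract" the object's area: drawing now erases the buffer.
  buf.globalCompositeOperation = 'destination-out';
  buf.fillRect(obj.x, obj.y, obj.w, obj.h);
  buf.globalCompositeOperation = 'source-over'; // reset, as the answer says
  // 3. Composite: the static background shows through the hole.
  screen.drawImage(background, 0, 0);
  screen.drawImage(buffer, 0, 0);
}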

Kinect+Processing: detect where user arm+hand is pointing?

I have an idea for a project, but as I've never used the Kinect before, I want to know if it can be done with it. What I want to do is find out where a user is pointing (in 3D space). So: I want to detect the skeleton of the user's arm (I saw this can be done) and then virtually 'extend' it (by drawing a line, for example) to check where it is pointing in space. Basically, I will have a wall, and I want to find out where (which area) on that wall the user's arm is pointing (the user won't be touching the wall, of course).
If you know of any interesting sources, I'd really appreciate it.
Sorry for my English.
Thanks.
Getting the arm's skeleton and projecting it won't help you on its own: there is no way of telling where the Kinect is relative to the screen, so you won't know which point on the screen the user is pointing at. You could achieve something like that using calibration: ask the user to point at the center of the screen, save that coordinate, and then do all the calculations relative to it.
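Once the wall is calibrated into the Kinect's coordinate space, the 'extend the arm' step is a simple ray-plane intersection. A hedged JavaScript sketch, assuming you already get 3D shoulder and hand joints from the skeleton and describe the wall by a point and a normal (all names here are hypothetical):

// Extend the shoulder->hand direction until it hits the wall plane.
// All vectors are {x, y, z} in the Kinect's coordinate space.
function sub(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

function pointOnWall(shoulder, hand, wallPoint, wallNormal) {
  const dir = sub(hand, shoulder);          // pointing direction
  const denom = dot(dir, wallNormal);
  if (Math.abs(denom) < 1e-6) return null;  // arm parallel to the wall
  const t = dot(sub(wallPoint, shoulder), wallNormal) / denom;
  if (t < 0) return null;                   // pointing away from the wall
  return {
    x: shoulder.x + dir.x * t,
    y: shoulder.y + dir.y * t,
    z: shoulder.z + dir.z * t,
  };
}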
