Kinect+Processing: detect where user arm+hand is pointing? - processing

I have an idea for a project but, as I've never used the Kinect before, I want to know whether it can be done with it. What I want to do is find out where a user is pointing (in 3D space). I want to detect the skeleton of the user's arm (I've seen this can be done) and then virtually 'extend' it (by drawing a line, for example) to check where it is pointing in space. Basically I will have a wall, and I want to find out which area of that wall the user's arm is pointing at (without the user touching the wall, of course).
If you know of any interesting sources I'd really appreciate it.
Sorry for my English.
Thanks

Getting the arm's skeleton and projecting it won't be enough on its own: there is no way of telling where the Kinect is relative to the screen, so you won't know which point on the screen the user is pointing at. You could work around this with calibration: ask the user to point at the center of the screen, save that coordinate, and then do all the calculations relative to it.
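Once the wall's plane is known (or calibrated) in the Kinect's coordinate system, "extending the arm" is just a ray–plane intersection. Here is a minimal sketch in Python with NumPy; the joint positions and the wall plane are made-up example values, not output from any Kinect API:

```python
import numpy as np

def point_on_wall(elbow, hand, wall_point, wall_normal):
    """Extend the elbow->hand ray until it hits the wall plane.

    Returns the 3D intersection point, or None if the arm is
    parallel to the wall or pointing away from it.
    """
    elbow, hand = np.asarray(elbow, float), np.asarray(hand, float)
    direction = hand - elbow                      # pointing direction
    denom = np.dot(wall_normal, direction)
    if abs(denom) < 1e-9:                         # parallel to the wall
        return None
    t = np.dot(wall_normal, wall_point - elbow) / denom
    if t < 0:                                     # pointing away from the wall
        return None
    return elbow + t * direction

# Example: the wall is the plane z = 0, the user stands 3 m in front of it.
hit = point_on_wall(elbow=[0.0, 1.0, 3.0], hand=[0.5, 1.0, 2.0],
                    wall_point=np.array([0.0, 0.0, 0.0]),
                    wall_normal=np.array([0.0, 0.0, 1.0]))
print(hit)  # [1.5 1.  0. ]
```

Which 2D "area" of the wall that 3D point falls in is then a simple bounds check against your grid of regions.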

Related

Pygame Trail Leaving

Alright, so I'm currently working on a big pygame project: a tank game based on Space Invaders. The issue (I mean, it's not strictly necessary, but whatever) is that whenever my tank moves left or right, I want it to leave a trail. The trail is an image of an actual tank tread mark, and I'd like to keep displaying that image after the tank moves left, right, down or up, exactly as if it were leaving a trail. The point is that I want my game to look very cool, and I think this is a big extra. I won't post my code since I just need general instructions on this topic, not a specific code clarification.
Thank you all in advance, you're awesome! :D
P.S. Here's a trail image! I'm sorry it's a link though :(
https://i.stack.imgur.com/177IS.png
You would have to create an image for the tread marks and load it with transparency/alpha. Then, as the tank drives, you would add those marks to a list of positions trailing the tank. You would need to keep the entire list and keep drawing it onto the background as you redraw the screen.
One thing to keep in mind: if the tank drives back over the marks, you won't want them visible on top of it, so you will need to use layers or simply make sure the tank is drawn last.
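A minimal sketch of that idea in pygame — the surfaces here are placeholder rectangles standing in for your real images (you would load the tread image with `pygame.image.load(...).convert_alpha()` instead), and the movement loop is simplified:

```python
import os
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")  # allow headless runs
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

# Placeholder assets: in the real game these would be loaded images.
tread = pygame.Surface((16, 8), pygame.SRCALPHA)
tread.fill((80, 60, 40, 180))          # semi-transparent tread mark
tank = pygame.Surface((32, 32))
tank.fill((0, 120, 0))

tank_pos = [100, 100]
trail = []            # (x, y) positions where tread marks were dropped
MAX_TRAIL = 200       # cap the list so it doesn't grow forever

def move_tank(dx, dy):
    trail.append(tuple(tank_pos))      # drop a mark where the tank was
    if len(trail) > MAX_TRAIL:
        trail.pop(0)
    tank_pos[0] += dx
    tank_pos[1] += dy

def draw():
    screen.fill((30, 30, 30))
    for x, y in trail:                 # draw the whole trail first...
        screen.blit(tread, (x, y))
    screen.blit(tank, tank_pos)        # ...and the tank last, so it sits on top
    pygame.display.flip()

for _ in range(5):                     # stand-in for the real game loop
    move_tank(10, 0)
    draw()
```

Drawing the tank after the trail is what keeps the marks from appearing on top of it when it drives back over them.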

Joint for physical gun slider?

I want to make it so that when I fire my gun, the slider shoots back and then comes forward like on a real gun. However, I don't know how to start, because I can't find any relevant information on Google and I don't know what to search for. All I need is the logic behind it; I can code it myself.
I would agree with zambari.
Just create an animation and play it when your gun gets fired.
Edit:
Since you are talking about 3D, you could either:
move the pivot of the object to the point where the trigger would be attached, so all you need to do is change the object's rotation for the animation, or
use joints.
This is the best tool for VR guns with interactable parts. I HIGHLY recommend looking at this to ANYONE making a VR game
https://github.com/Oyshoboy/weaponReloadVR
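Whichever engine you use, the motion itself is just interpolating the slide's local offset over time: a fast snap backwards, then a slower return forward. A language-agnostic sketch of that logic in Python (the timings and travel distance are made-up values you would tune):

```python
def slide_offset(t, back_time=0.05, return_time=0.15, max_back=0.04):
    """Local offset (in metres) of the slide, t seconds after firing.

    0.0 means fully forward at rest; max_back is the rearmost position.
    """
    if t < 0:
        return 0.0
    if t < back_time:                       # fast snap backwards
        return max_back * (t / back_time)
    t -= back_time
    if t < return_time:                     # slower spring forward
        return max_back * (1.0 - t / return_time)
    return 0.0                              # back at rest

# Each frame, set the slide's local position along its travel axis
# to -slide_offset(time_since_shot).
print(slide_offset(0.0))    # 0.0  (at rest)
print(slide_offset(0.05))   # 0.04 (fully back)
print(slide_offset(0.2))    # 0.0  (returned)
```

In an engine you would typically express the same curve as an animation clip or a spring joint rather than code it by hand.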

How to use texture masks in GameMaker?

First off, I'm not totally sure "texture masks" is the correct term to use here, so if someone knows what it is, please let me know.
So, the real question: I want to have an object in GameMaker: Studio whose texture changes as it moves around, depending on its position, by pulling from a larger static image behind it. I've made a quick gif of what it might look like.
It can be found here
Another image that might help explain this is the "source-in" section of this image.
This is a reply to the same question posted on the steam GML forum by MrDave:
The feature you are looking for is draw_set_blend_mode(bm_subtract)
Basically you will have to draw everything onto a surface and then using the code above you switch the draw mode to bm_subtract. What this will do is rather than drawing images to the screen it will remove them. So you now draw blocks over the background and this will remove that area. Then you can draw everything you just put on the surface onto the screen.
(Remember to reset the draw mode and the surface target after. )
It's hard to get your head around the first time, but it actually isn't all that complex once you get used to it.
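GameMaker specifics aside, the underlying "source-in" effect is just re-sampling the static image at the object's current position every frame. A sketch of that idea in Python with NumPy, treating images as plain arrays (in GameMaker itself you'd use surfaces and blend modes as the answer above describes):

```python
import numpy as np

# Stand-in for the large static image behind everything: a 64x64 gradient.
background = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)

def masked_sprite(bg, x, y, w, h):
    """Return the sprite's texture: the patch of the background
    currently covered by the sprite's bounding box."""
    return bg[y:y + h, x:x + w].copy()

# As the object moves, its texture is re-sampled from the background,
# so it looks like a moving window onto the image behind it.
tex_a = masked_sprite(background, 0, 0, 8, 8)
tex_b = masked_sprite(background, 10, 0, 8, 8)
```

The two patches differ because each position samples a different region, which is exactly the visual effect in the gif.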

Augmented reality: Rendering function

My question builds on this thread: Computer Vision / Augmented Reality: how to overlay 3D objects over vision? and on its first answer. I want to build an application that projects, in real time, the position of a fictional 3D object onto a video feed, but the first step I have to take is: how can I do this for a single image?
What I am going for at the moment is some kind of function that, given a picture, its 6D pose (position + orientation), a 3D object (in FBX, 3DS, or something easily convertible to or from other formats), and that object's own position and orientation, returns the projection of the 3D object over the image. Once I have that, I should be able to apply it to every frame of the video feed (how I will get the 6D information of the camera is a problem I'll deal with later).
My problem is that I am unsure where to find such a function, if it even exists. It should be offered as some kind of script or API so an external program can make use of it. Where should I look? Unity? Some kind of OpenCL functionality? So far my reading has not given me any conclusive answers, and as I am a novice in the topic, I'm sure a steep learning curve is ahead and I'd rather put my efforts in the right direction. Thank you.
Indeed there's an API for that.
https://developer.vuforia.com
Read the Getting Started page.
On that site there is a "Target Manager" where you'll upload your target images. Those will allow you to display the 3D object you want.
On the same page you can have several target images.
Example: one that displays your 3D object when visible, one that makes it rotate when hidden, etc.
For the real-time video projection part, I will assume that in Unity you can have a movie texture running on a plane in the background and sort your layers so that your 3D object is drawn on top.
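If you end up doing the projection step yourself rather than through an SDK, the function the question describes is the pinhole camera model: rotate and translate the object's 3D points into camera space, then apply the camera's intrinsic matrix (OpenCV packages this same math as `cv2.projectPoints`). A minimal sketch with made-up intrinsics and pose:

```python
import numpy as np

def project_points(points_3d, R, t, K):
    """Project Nx3 world points to Nx2 pixel coordinates.

    R (3x3 rotation) and t (3-vector) give the camera's 6D pose;
    K is the 3x3 intrinsic matrix (focal lengths + principal point).
    """
    pts = np.asarray(points_3d, float)
    cam = pts @ R.T + t              # world -> camera coordinates
    uvw = cam @ K.T                  # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:]   # perspective divide

# Made-up example: camera 5 m from the object, no rotation,
# 800 px focal length, principal point at the image centre.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
uv = project_points([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], R, t, K)
print(uv)   # [[320. 240.] [480. 240.]]
```

Applying this to every vertex of the loaded 3D model gives you the 2D footprint to draw over each frame; lens distortion is ignored here, which `cv2.projectPoints` can also handle.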
Please update the topic whenever you find a way.
Bye

Making a follow camera in Motionbuilder

I am using VICON c3d files of someone walking, and I want to change the point of view to first person. I started by setting the original camera at eye level, but when the character starts walking, the camera doesn't follow the eyes, so it turns into a view from behind.
Your question isn't programming-related on any level whatsoever. But if you want an answer: you need to constrain the camera to the head/eyes so that it follows them in the first-person manner you want.
