I've been stuck for the last two weeks on updating the threejs_mousepick.html example from an old THREE.js release to the current one. Oh, yeah, I am a newbie to programming.
I've created a Fiddle, hoping someone would spend some time helping me. CANNON.js is a great API, and it is sad to see that the examples are so old/unusable with today's THREE.js. I understand it is a lot of work and I am willing to help, but I need some help first. So, if you read this, #schteppe, get in touch: I am willing to spend some time working on this.
The answer is as broad as the question.
Using THREE.Raycaster() and THREE.Plane() simplifies things a lot. It lets you get rid of functions such as projectOntoPlane, findNearestIntersectingObject and getRayCasterFromScreenCoord, and shortens the setScreenPerpCenter function (its name is ridiculous, but I left it as it was) to just one line.
jsfiddle example r87
gplane is a THREE.Plane():
var gplane = new THREE.Plane(), gplaneNormal = new THREE.Vector3();
As written in the descriptive comment, we create a virtual plane on which we move our joint point.
function setScreenPerpCenter(point) {
gplane.setFromNormalAndCoplanarPoint(gplaneNormal.subVectors(camera.position, point).normalize(), point);
}
Here we set our plane from a normal and a coplanar point: the normal is the normalized vector from the clicked point on the cube to the camera position (camera.position minus the point), and the point is that clicked point itself. Read about that method here
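For completeness, dragging works the same way: on every mouse move you cast a ray through the cursor and intersect it with gplane instead of calling projectOntoPlane. A minimal sketch (the onMouseMove handler and the moveJointToPoint helper are assumptions based on the original demo):

var raycaster = new THREE.Raycaster(), mouse = new THREE.Vector2(), intersection = new THREE.Vector3();

function onMouseMove(event) {
  // convert the cursor position to normalized device coordinates (-1..+1)
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(mouse, camera);
  // intersect the picking ray with the virtual plane and move the joint there
  if (raycaster.ray.intersectPlane(gplane, intersection)) {
    moveJointToPoint(intersection); // e.g. update the CANNON constraint target
  }
}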
Related
I am currently trying to dive into the topic of WebGL shaders with THREE.js. I would appreciate it if someone could give me some starting points for the following scenario:
I would like to create a fluid-like material which either interacts with the user's mouse or «flows» on its own.
a little like this
http://cake23.de/turing-fluid.html
I would like to pass a background image to it, which serves as a starting point in terms of which colors are shown in the «liquid sauce» and where they are at the beginning. So to speak: I define the initial image, which is then transformed by a self-initiated liquid flow and also by the user's interaction.
How I would proceed, with my limited knowledge:
I create a plane with the wanted image as a texture.
On top (between the image and the camera) I create a new mesh (plane too?) and this mesh has some custom vertex and fragment shaders applied.
Those shaders should somehow take the color from behind (from the image) and then move those vertices around following some physical rules...
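In code, I imagine that starting point looking roughly like the sketch below (just a guess on my part; the texture path, uniform names and the wobble formula are placeholders, not real fluid dynamics):

var loader = new THREE.TextureLoader();
var uniforms = {
  uTexture: { value: loader.load('background.jpg') }, // the initial image (placeholder path)
  uTime:    { value: 0 },
  uMouse:   { value: new THREE.Vector2(0.5, 0.5) }    // cursor position in UV space
};
var material = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: [
    'varying vec2 vUv;',
    'void main() {',
    '  vUv = uv;',
    '  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
    '}'
  ].join('\n'),
  fragmentShader: [
    'uniform sampler2D uTexture;',
    'uniform float uTime;',
    'uniform vec2 uMouse;',
    'varying vec2 vUv;',
    'void main() {',
    '  // toy "flow": wobble the texture lookup; a real solver would advect a velocity field',
    '  vec2 offset = 0.01 * vec2(sin(vUv.y * 20.0 + uTime), cos(vUv.x * 20.0 + uTime));',
    '  offset += 0.02 * (vUv - uMouse) / (0.1 + distance(vUv, uMouse));',
    '  gl_FragColor = texture2D(uTexture, vUv + offset);',
    '}'
  ].join('\n')
});
var plane = new THREE.Mesh(new THREE.PlaneGeometry(4, 3), material);
scene.add(plane);
// in the render loop: uniforms.uTime.value += 0.01; update uMouse on mousemove

Whether that is the right direction, I don't know.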
I realize that the example linked above has unminified code, but it is still so much that I can't really break it down into simpler terms that I fully understand. So I would really appreciate it if someone could give me some simpler concepts to serve as a starting point.
More pages addressing things like this:
http://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm
https://29a.ch/sandbox/2012/fluidwebgl/
https://haxiomic.github.io/GPU-Fluid-Experiments/html5/
Well, anyway thanks for every link or reference, for every basic concept or anything you'd like to share.
Cheers
Edit:
Getting a (visually) similar result to this image would be great:
I'm trying to accomplish a similar thing. I have been surfing the web a lot, looking for any hint I can use. So far, my conclusions are:
Try to support yourself with three.js.
The magic is really in the shaders, mostly in the fragment shaders. It would be a good idea to start by understanding how to write them and how they work. This link is a good start: shader tutorial
Understanding the dynamic (natural/real) behavior of fluids could be valuable (equations).
Maybe this can help you a bit too: Raindrop simulation
If you have found something more around that, let me know.
I found these shaders already created. Maybe one of them can help you without forcing you to learn plenty of stuff: splash shaders
good luck
I'm working on a project where I'm capturing people making free throw shots via a video camera. I need a way to detect, as fast as possible, the instant the ball is released from a player's hand. I tried researching a lot of detection/tracking algorithms, but everything I've found seemed more suited to tracking the ball itself. While I may eventually want to do that, right now all I need to know is the release timing.
I'm also open to other solutions that don't use the camera (I have a decent budget), but of course I'd like to use the camera if possible/fast enough. I'm also able to mess with the camera positioning/setup, and what I even want in the FOV.
Does anyone have any ideas? I'm pretty stuck right now, and haven't been able to find anything online that can help.
A solution is to use visual markers (motion trackers) on the throwing hands and on the ball. The precision depends on the FPS of the camera.
The assumption is that you know the ball's dimensions and the hand's grip on the ball, which may vary. By using visual markers/trackers you can know the position of the ball relative to the hand. When the distance between the hand and the ball's center becomes bigger than the distance between the ball's center and its extremity (its radius), that is when you have your release. Schema of the method
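Per frame, the check itself is just a distance comparison. A rough sketch (the marker data format here is only an assumption about what your tracker outputs):

function distance(a, b) {
  var dx = a.x - b.x, dy = a.y - b.y;
  return Math.sqrt(dx * dx + dy * dy);
}

// frames: [{ hand: {x, y}, ballCenter: {x, y} }, ...] extracted from the video
function findReleaseFrame(frames, ballRadius) {
  for (var i = 0; i < frames.length; i++) {
    // the first frame where the hand marker is farther from the ball's center
    // than the ball's radius is the release frame
    if (distance(frames[i].hand, frames[i].ballCenter) > ballRadius) {
      return i;
    }
  }
  return -1; // no release found in this clip
}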
A better solution is to use a graded meter bar (alternating black and white bars like the ones used on the MythBusters show to track the speed of objects). The moment there is a color gap between the hand and the ball, you have your release. The downside of this approach is that you have to capture the image from a side angle or a top-down angle and use panels to hold the grading.
Your problem is similar to billiard-ball collision detection. I hope you find this paper helpful.
Edit:
There is a powerful and not-that-expensive tool used for motion capture called the Microsoft Kinect. The downside of this tool is that its camera works at 30 fps and you cannot use it accurately in a very sunny scene. However, I have found a scientific paper about using the Kinect to record athletes, including free throws in basketball. Paper here
It's my first answer on SO. Any feedback on how to improve my future answers is appreciated.
I'm currently working on a project to read the gauges on our chiller. I'm not a programmer by trade, so I'm trying to learn as I go, but SimpleCV's documentation isn't that great (IMO...).
At the moment I'm doing a findLines on each image, and it sort of works, but it will occasionally find a "line" on the edge of the gauge itself, or return some other weird result.
What I'd like to do is paint the gauge pivot one color and the tip of the needle another color, and measure the angle between the two. I think I have the color blob detection figured out, but I can't figure out the angle-measuring part.
Anyone have any ideas? All I need is the angle to be returned; the BMS system will accept the angle reading and do the scale conversion itself, so that isn't a problem.
One of the core SimpleCV developers here. Sorry the docs aren't up to snuff.
I think if you can paint the gauge it will probably make it easier, or you may not even need to.
I whipped up an example here, so you can see the image output along the way as well:
http://nbviewer.ipython.org/github/xamox/sandbox/blob/master/gas-gauge-angle/Gauge%20Angle.ipynb
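The angle math itself is library-agnostic: once you have the centroid of the pivot blob and the centroid of the needle-tip blob, it is just an atan2 of their difference. A small sketch with made-up centroid coordinates (remember that image y grows downward, hence the sign flip):

var pivot = { x: 160, y: 120 };      // centroid of the painted pivot blob (hypothetical)
var needleTip = { x: 200, y: 80 };   // centroid of the painted needle-tip blob (hypothetical)
var angleDeg = Math.atan2(-(needleTip.y - pivot.y), needleTip.x - pivot.x) * 180 / Math.PI;
// 0 degrees = needle pointing right, counter-clockwise positive; rescale however your BMS expects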
I am trying to create a destructible floor in Chipmunk.
Pretty much, I need a floor or body object such that, when a bomb explodes, the floor disappears within the ball's defined explosion area.
I considered using a cpPoly shape to do this and redefining the vertices each time a bomb exploded, but discovered that this was not only intractable but practically impossible.
Does anybody have any ideas on how I could do this in Chipmunk? Sorry, I am relatively new to the language and know literally only the basics. Thank you for your help.
Deformable terrain with a physics engine isn't easy. I've been making an add-on library for Chipmunk that can help do it though. http://www.cocos2d-iphone.org/forum/topic/20792
Basically you are going to have to reconstruct the geometry every time something changes. Your choices include marching squares or CSG. Neither is particularly easy to make run in real time, though.
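To make "reconstruct the geometry" a bit more concrete, here is a heavily simplified sketch of the bookkeeping (this is not Chipmunk API, and it only emits the exposed top edges of a tile grid; a real implementation would use marching squares to get all boundary directions and then rebuild segment shapes from the edge list):

// terrain as an occupancy grid; a blast carves a circular hole out of it
var COLS = 100, ROWS = 50, CELL = 8; // grid resolution and cell size (assumed values)
var solid = [];
for (var r = 0; r < ROWS; r++) {
  var row = [];
  for (var c = 0; c < COLS; c++) row.push(true); // start with a completely solid floor
  solid.push(row);
}

// clear every cell whose center lies inside the explosion circle
function carveExplosion(cx, cy, radius) {
  for (var r = 0; r < ROWS; r++) {
    for (var c = 0; c < COLS; c++) {
      var x = (c + 0.5) * CELL, y = (r + 0.5) * CELL;
      if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= radius * radius) solid[r][c] = false;
    }
  }
}

// collect the exposed top edges of the remaining cells; each edge would become
// one collision segment (remove the old segment shapes, add new ones from this list)
function boundaryEdges() {
  var edges = [];
  for (var r = 0; r < ROWS; r++) {
    for (var c = 0; c < COLS; c++) {
      if (solid[r][c] && (r === 0 || !solid[r - 1][c])) {
        edges.push({ x1: c * CELL, y1: r * CELL, x2: (c + 1) * CELL, y2: r * CELL });
      }
    }
  }
  return edges;
}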
I am using VICON c3d files of someone walking, and I want to change the point of view to be first person. I started by setting the original camera to eye level, but when the character starts walking, the camera doesn't follow the eyes, so it turns into a view from behind.
Your question isn't programming-related on any level whatsoever. But if you want an answer, it's that you need to constrain the camera to the head/eyes so that it follows the eyes in the first-person manner you want.
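In a three.js-style viewer, for example, that "constraint" is just copying the head marker position into the camera every frame before rendering (a sketch; headMarkerPosition and lookTarget stand for whatever your c3d playback provides, e.g. a forehead marker and a point slightly in front of the face):

function updateFirstPersonCamera(camera, headMarkerPosition, lookTarget) {
  camera.position.copy(headMarkerPosition); // camera rides on the head/eye marker each frame
  camera.lookAt(lookTarget);                // face the walking direction / gaze target
}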