I have used the AnaglyphEffect in three.js
anaglyph effect example
and tried to update the shader equation so the object pops out of the screen instead of appearing at a depth behind it. Any ideas that could help?
Also, is this possible with any other 3D vision algorithm?
You do not need a shader for anaglyph. You simply render the scene twice, once for each eye: for the left eye you render only to the red channel, for the right eye only to green and blue. This is easy to do with a color write mask in GL. The 3D effect is controlled by giving the two passes different camera matrices; changing them changes the effect. There is no need to touch the shader. http://en.wikipedia.org/wiki/Anaglyph_3D
The apparent distance of things - behind or in front of the screen - depends on how you set up the two cameras relative to how two eyes look at the screen. So if you make the two cameras' lines of sight converge closer to their origin, things will "pop out" more.
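For example, a minimal sketch of that two-pass idea in three.js (not the stock AnaglyphEffect; it assumes scene, renderer and two eye cameras named cameraLeft and cameraRight already exist, and that your three.js version does not override the color mask internally):
var gl = renderer.getContext();   // underlying WebGL context
renderer.autoClear = false;
function renderAnaglyph() {
    renderer.clear();                            // clear color + depth once
    gl.colorMask( true, false, false, false );   // left eye writes red only
    renderer.render( scene, cameraLeft );
    renderer.clear( false, true, false );        // clear only depth between the eyes
    gl.colorMask( false, true, true, false );    // right eye writes green + blue
    renderer.render( scene, cameraRight );
    gl.colorMask( true, true, true, true );      // restore normal color writes
}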
As explained in this article, Calculating Stereo Pairs, I have to change focalLength, eyeSep and eyeSepOnProjection to achieve the desired effect; manipulating one of these variables changes the perceived depth of the 3D vision.
In AnaglyphEffect.js the variable focalLength represents the distance at which a target appears to sit on the screen plane, and it is defined as 125. Objects farther away will appear behind the screen. The eye separation is derived from the focal length, as you can see below:
var eyeSep = focalLength / 30 * 0.5;
var eyeSepOnProjection = eyeSep * _near / focalLength;
The focal length can be added as a parameter of the constructor so it can be accessed and modified directly from the main script:
THREE.AnaglyphEffect = function ( renderer, focalLength, width, height) {
...
if ( focalLength === undefined ) focalLength = 125;
this.focalLength = focalLength;
...
}
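With that change, usage from the main script might look like this (250 is just an illustrative focal length, larger than the default 125, so more of the scene sits in front of the convergence plane):
var effect = new THREE.AnaglyphEffect( renderer, 250, window.innerWidth, window.innerHeight );
function animate() {
    requestAnimationFrame( animate );
    effect.render( scene, camera );   // the effect renders the two eye passes internally
}
animate();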
I have a grid of points (object3Ds using THREE.Points) in my Three.js scene, with a model sitting on top of the grid. In code the model is called defaultMesh and uses a merged geometry for performance reasons.
I'm trying to work out which of the points in the grid my perspective camera can see at any given time, i.e. every time the camera position is updated by my orbital controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays intersect the model and remove the points corresponding to those rays from the list of all points, leaving me with a list of the points the camera can see.
So far so good. The ray creation and intersection code is placed in the render loop (as it has to be updated whenever the camera is moved), and therefore it's horrendously slow (obviously).
var gridPointsVisible = [];
var startPoint = camera.position.clone();
var direction = new THREE.Vector3();
// cast a ray from the camera position towards each point in the grid
for ( var i = 0; i < gridPoints.geometry.vertices.length; i++ ) {
    direction.subVectors( gridPoints.geometry.vertices[ i ], startPoint ).normalize();
    var ray = new THREE.Raycaster( startPoint, direction );
    // if the ray hits the model, the point is considered hidden; otherwise keep it
    if ( ray.intersectObject( defaultMesh ).length === 0 ) {
        gridPointsVisible.push( gridPoints.geometry.vertices[ i ] );
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
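A rough sketch of the read-back side of that idea in three.js (it assumes an older three.js where renderer.render accepts a render target and renderer.readRenderTargetPixels is available, that you have already built a separate pickingScene whose flat-color material writes each point's index into the red channel, and that there are fewer than 256 points; pickingScene and pickingTarget are made-up names for the example):
var pickingTarget = new THREE.WebGLRenderTarget( 128, 128 );   // low resolution is fine
function findVisiblePointIndices() {
    renderer.render( pickingScene, camera, pickingTarget );    // off-screen picking pass
    var buffer = new Uint8Array( 128 * 128 * 4 );
    renderer.readRenderTargetPixels( pickingTarget, 0, 0, 128, 128, buffer );
    var seen = {};
    for ( var i = 0; i < buffer.length; i += 4 ) {
        seen[ buffer[ i ] ] = true;    // red channel holds the point index
    }
    return seen;    // any point whose index is missing from 'seen' is hidden
}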
I just started learning OpenGL and cocos2d and I need some advice.
I'm writing a game in which the player is allowed to touch and move rectangles on the screen in a top-down view. Every time a rectangle is touched, it moves up (towards the screen) in the z direction and is scaled a bit to look like it's closer than the rest. It drops back down to z = 0 after the touch ends.
I'd like the raised rectangles to drop a shadow under them, but I can't get it to work. What approach would you recommend for the best result?
Here's what I have so far.
During setup I turn on the depth buffer and then:
1. all the textures are generated with CCRenderTexture
2. the generated textures are used as an atlas to create CCSpriteBatchNode
3. when a rectangle (tile) is touched:
static const float _raisedScale = 1.2;
static const float _raisedVertexZ = 30;
...
-(void)makeRaised
{
_state = TileStateRaised;
self.scale = _raisedScale;
self.vertexZ = _raisedVertexZ;
_glowOverlay.vertexZ = _raisedVertexZ;
_glowOverlay.opacity = 255;
}
The glow overlay is used to "light up" the rectangle.
After that I animate it using -(void)update:(ccTime)delta
Is there a way to make OpenGL cast the shadow for me using cocos2d, for example with shaders or OpenGL shadowing? Or do I have to use a texture overlay to simulate the shadow?
What do you recommend? How would you do it?
Sorry for a newbie question, but it's all really new to me and I really need your help.
EDIT 6th of March
I managed to get sprites with a shadow overlay to show under the tiles, and it looks OK until one tile has to drop a shadow on another tile that has a non-zero vertexZ value. I tried to create additional shadow sprites that would be scaled and shown on top of the other tiles (usually while rising or falling), but I have problems with the animation (tile up, tile down).
Why complicate the problem?
Simply create a projection of how the shadow would look using your favourite graphics editing program and save it as a PNG. When the object is lifted, insert your shadowSprite behind the lifted object (you can shift it left/right depending on where you think your light source is).
When the user drops the object down, the shadow can remain under the object and move with it, becoming visible again when the item is lifted.
Hi, I am making a car game where I draw a car-shaped rectangle as follows. xP and yP come dynamically from the keyboard event in JavaScript, and so does the rotation.
ctxDrift.clearRect(0, 0, 426, 754);
ctxDrift.save();
ctxDrift.beginPath();
ctxDrift.translate(xP-car.getWidth()/2, yP-car.getHeight()/2);
ctxDrift.rotate((Math.PI / 180) * car.getRotation());
ctxDrift.translate(-xP, -yP);
ctxDrift.rect(xP-car.getWidth()/2, yP-car.getHeight()/2, car.getWidth(), car.getHeight());
ctxDrift.fillStyle = 'yellow';
ctxDrift.fill();
ctxDrift.restore();
Now there are some rectangular obstacles, which are not rotated. How can I check for a hit between these 2 objects? In other words, how can I check whether one rectangle's points lie inside another rectangle when one of them is rotated?
Even before you get started with collision testing:
Canvas does not track where your objects are on the canvas. You must manually keep track of the accumulated .translate() and .rotate() calls you make. You do this by capturing the transformation changes for each user keyboard event and accumulating them into one final transformation that you can use to start hit testing.
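A minimal sketch of that bookkeeping (the key codes and step sizes are assumed for illustration, not taken from your game): keep the car's position and rotation in your own state object, update it on every key event, and drive both the drawing and the hit testing from that state:
var carState = { x: xP, y: yP, rotation: 0 };    // single source of truth for the car
document.addEventListener( 'keydown', function ( e ) {
    if ( e.keyCode === 37 ) carState.rotation -= 5;    // left arrow: turn left
    if ( e.keyCode === 39 ) carState.rotation += 5;    // right arrow: turn right
    if ( e.keyCode === 38 ) {                          // up arrow: move forward
        var rad = carState.rotation * Math.PI / 180;
        carState.x += Math.sin( rad ) * 4;
        carState.y -= Math.cos( rad ) * 4;
    }
} );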
From there, the math on collision testing gets quickly complicated!
Your simplest collision test is to surround each rectangle with a circle and then check whether the distance between the circle centerpoints is less than the sum of the 2 circle radii. The code looks like this:
function CirclesCollide(x1,y1,radius1,x2,y2,radius2){
return ( Math.sqrt( ( x2-x1 ) * ( x2-x1 ) + ( y2-y1 ) * ( y2-y1 ) ) < ( radius1 + radius2 ) );
}
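A hedged usage sketch for the car above: treat (xP, yP) as roughly the car's centre and use half of its diagonal as the radius (the obstacle object and its properties are assumed for the example):
var carRadius = Math.sqrt( car.getWidth() * car.getWidth() + car.getHeight() * car.getHeight() ) / 2;
var obsRadius = Math.sqrt( obstacle.width * obstacle.width + obstacle.height * obstacle.height ) / 2;
var obsCenterX = obstacle.x + obstacle.width / 2;
var obsCenterY = obstacle.y + obstacle.height / 2;
if ( CirclesCollide( xP, yP, carRadius, obsCenterX, obsCenterY, obsRadius ) ) {
    // the car and the obstacle are (approximately) colliding
}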
If you want better collision testing and you're willing to wade through LOTS of math, here is a good source of 3 collision tests: http://www.sfml-dev.org/wiki/en/sources/simple_collision_detection
Perhaps the best solution is to use a canvas library like FabricJs which tracks where your objects are on the canvas and provides the hit-testing for you. Easy as this!
var theyAreColliding = myCar.intersectsWithObject(myObstacle);
The easiest way is to rotate the rectangle bounding boxes, so they are essentially no longer rotated, before you do the collision check. Then rotate them back before the image is drawn.
Even better, have a bounding box that doesn't rotate which can be used for broad-phase testing (a quick and cheap check to see if you need to then do a narrow-phase check).
This is known as an axis-aligned bounding box, or AABB for short. This greatly simplifies your collision detection code.
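For reference, the broad-phase AABB overlap test itself is only a few comparisons (rectangles given as { x, y, width, height } with x/y at the top-left corner):
function aabbsOverlap( a, b ) {
    return a.x < b.x + b.width  && a.x + a.width  > b.x &&
           a.y < b.y + b.height && a.y + a.height > b.y;
}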
update: Found this link that might be useful.
This is what I was looking for for this query:
http://www.rgraph.net/blog/2012/october/new-html5-canvas-features.html
Canvas now has an addHitRegion() function, with which we can track this easily.
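For what it's worth, a hedged sketch of how that API is used (addHitRegion is experimental and not supported in every browser, and the canvas variable here is assumed to be the drawing canvas): call it right after building the car's path, then read the region property on mouse events:
ctxDrift.rect( xP - car.getWidth() / 2, yP - car.getHeight() / 2, car.getWidth(), car.getHeight() );
ctxDrift.addHitRegion( { id: 'car' } );    // the region follows the current path
canvas.addEventListener( 'click', function ( e ) {
    if ( e.region === 'car' ) {
        // the click landed inside the car's hit region
    }
} );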
Another good one:
http://www.playmycode.com/blog/2011/08/javascript-per-pixel-html5-canvas-image-collision-detection/
I have finally added my own logic, which is here
http://jslogic.blogspot.in/2013/02/javascript-bound-rectangle-area-while.html
I am writing a particle engine for iOS using MonoTouch and OpenTK. My approach is to project the coordinate of each particle and then draw a correctly scaled, textured rectangle at this screen location.
It works fine, but I have trouble calculating the correct depth value so that the sprite will correctly overdraw, and be overdrawn by, 3D objects in the scene.
This is the code I am using today:
// d = distance to the projection plane
float d = (float)( 1.0 / Math.Tan( MathHelper.DegreesToRadians( fovy / 2f ) ) );
Vector3 screenPos;
Vector3.Transform( ref objPos, ref viewMatrix, out screenPos );
float depth = 1 - d / -screenPos.Z;
Then I draw a triangle strip at that screen coordinate, using the depth value calculated above as the z coordinate.
The results are almost correct, but not quite. I guess I need to take the near and far clipping planes into account somehow (near is 1 and far is 10000 in my case), but I am not sure how. I tried various ways and algorithms without getting accurate results.
I'd appreciate some help on this one.
What you really want to do is take your source position and pass it through the modelview and projection matrices (or whatever you've set up instead, if you're not using the fixed pipeline). Supposing you've used one of the standard calls to set up the stack, such as glFrustum, and otherwise left things at identity, you can get the relevant formula directly from the man page. Reading directly from that, you'd transform as:
z_clip = -( (far + near) / (far - near) ) * z_eye - ( (2 * far * near) / (far - near) )
w_clip = -z_eye
Then, finally:
z_device = z_clip / w_clip;
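If you then need a value to compare against what is already in the depth buffer, one more step is required; assuming the default glDepthRange of [0, 1], the window-space depth is:
depth = 0.5 * z_device + 0.5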
EDIT: as you're working in ES 2.0, you can actually avoid the issue entirely. Supply your geometry for rendering as GL_POINTS and perform a normal transform in your vertex shader but set gl_PointSize to be the size in pixels that you want that point to be.
In your fragment shader you can then read gl_PointCoord to get a texture coordinate for each fragment that's part of your point, allowing you to draw a point sprite if you don't want just a single colour.
This is kind of frustrating me as I've been grizzling over it for a couple of hours now.
Basically I'm drawing 2D sprites through spritebatch and 3D orthographically projected geometry using the BasicEffect class.
My problem is controlling what gets rendered on top of what. At first I thought it would be simply controlling the render order, i.e. if I do:
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
I expected this to mean the 2D stuff would render over the 3D stuff; however, since I don't control when the device begins/ends its renders, that isn't the result. The 3D always renders on top of the 2D elements, no matter the projection settings, the world translation, the z components of the 3D geometry's vertex definitions, or the layer depth of the 2D elements.
Is there something I'm not looking into here?
What's the correct way to handle the depth here?
OK I figured it out 2 seconds after posting this question. I don't know if it was coincidence or if StackOverflow has a new feature granting the ability to see future answers.
The Z position of SpriteBatch elements is between 0 and 1, so it's not directly comparable to the z positions of the orthographic geometry being rendered.
When you create an orthographic matrix, however, you define near and far clip planes, and the Z positions you set should fall within that range. I had a hunch that the SpriteBatch class is effectively drawing quads orthographically, so by extension 0 would represent the near clip and 1 the far clip, with the sprite depth being written to the same depth buffer as the 3D geometry's depth.
Soooo, to make it work I figured that the near/far clips I was defining for the orthographic render would be measured against the 0-to-1 range of the sprites being rendered, so it was simply a matter of setting the right z value. For example:
If I have a near clip of 0 and a far clip of 10000, and I want geometry to correspond to a 0.5f layer depth - rendering in front of sprites drawn at 0.6 and behind sprites drawn at 0.4 - I do:
float zpos = 0.5f;
float orthoGraphicZPos = LinearInterpolate(0, 10000, zpos);
Or just zpos * 10000 :D
I guess it would make more sense to set your orthographic renderer's near/far clips to 0 and 1 so they compare directly with the sprites' layer depths.
Hopefully my reasoning for this solution was correct (more or less).
As an aside, since you mentioned you had a hunch about how the sprite batch draws its quads: you can see the source code for all the default/included shaders and the SpriteBatch class if you are curious, or if you need help solving a problem like this:
http://create.msdn.com/en-US/education/catalog/sample/stock_effects
The problem is that SpriteBatch changes some of the render states that are used when you draw your 3D objects. To fix this you just have to reset them before rendering your 3D objects, like so:
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
Note that this is for XNA 4.0, which I am pretty sure you're using anyway. More info can be found on Shawn Hargreaves' blog here. This will reset the render states, draw the 3D objects, then draw the 2D objects over them. Without resetting the render states you get weird effects like the ones you're seeing.