How to drop shadows under sprites in a top-down view with cocos2d - opengl-es

I just started learning OpenGL and cocos2d and I need some advice.
I'm writing a game in which the player is allowed to touch and move rectangles on the screen in a top-down view. Every time a rectangle is touched, it moves up (towards the screen) in the z direction and is scaled a bit to look like it's closer than the rest. It drops back to z = 0 after the touch ends.
I'd like the raised rectangles to drop a shadow beneath them, but I can't get it to work. What approach would you recommend for the best result?
Here's what I have so far.
During setup I turn on the depth buffer and then:
1. all the textures are generated with CCRenderTexture
2. the generated textures are used as an atlas to create CCSpriteBatchNode
3. when a rectangle (tile) is touched:
static const float _raisedScale = 1.2;
static const float _raisedVertexZ = 30;
...
-(void)makeRaised
{
_state = TileStateRaised;
self.scale = _raisedScale;
self.vertexZ = _raisedVertexZ;
_glowOverlay.vertexZ = _raisedVertexZ;
_glowOverlay.opacity = 255;
}
The glow overlay is used to "light up" the rectangle.
After that I animate it using -(void)update:(ccTime)delta
Is there a way to make OpenGL cast the shadow for me using cocos2d, for example using shaders or OpenGL shadow mapping? Or do I have to use a texture overlay to simulate the shadow?
What do you recommend? How would you do it?
Sorry for a newbie question, but it's all really new to me and I really need your help.
EDIT 6th of March
I managed to get sprites with a shadow overlay to show under the tiles, and it looks OK until one tile has to drop a shadow on another tile that has a non-zero vertexZ value. I tried creating additional shadow sprites that would be scaled and shown on top of the other tiles (usually while rising or falling), but I have problems with the animation (tile up, tile down).

Why complicate the problem?
Simply create a projection of how the shadow would look using your favourite graphics editing program and save it as a PNG. When the object is lifted, insert your shadowSprite behind the lifted object (you can shift it left/right depending on where you think your light source is).
When the user drops the object down, the shadow can remain under the object and move with it, making itself visible when the item is lifted again.

Related

Find which object3D's the camera can see in Three.js - Raycast from each camera to object

I have a grid of points (Object3Ds using THREE.Points) in my Three.js scene, with a model sitting on top of the grid, as seen below. In code the model is called defaultMesh and uses a merged geometry for performance reasons.
I'm trying to work out which of the points in the grid my perspective camera can see at any given time, i.e. every time the camera position is updated using my orbit controls.
My first idea was to use raycasting to create a ray between the camera and each point in the grid. Then I can find which rays are being intersected with the model and remove the points corresponding to those rays from a list of all the points, thus leaving me with a list of points the camera can see.
So far so good, but the ray creation and intersection code has to live in the render loop (as it has to be updated whenever the camera is moved), and therefore it's horrendously slow (obviously).
// start with every grid point marked as visible, then remove the occluded ones
var gridPointsVisible = gridPoints.geometry.vertices.slice(0);
var startPoint = camera.position.clone();
var vector = new THREE.Vector3();
// create a ray from the camera position towards each point in the grid
for (var i = 0; i < gridPoints.geometry.vertices.length; i++) {
    var gridPoint = gridPoints.geometry.vertices[i];
    vector.subVectors(gridPoint, startPoint);
    var ray = new THREE.Raycaster(startPoint, vector.clone().normalize());
    if (ray.intersectObject(defaultMesh).length > 0) {
        // the ray hits the mesh, so this grid point is occluded; drop it from the visible list
        gridPointsVisible.splice(gridPointsVisible.indexOf(gridPoint), 1);
    }
}
In the example model shown there are around 2300 rays being created, and the mesh has 1500 faces, so the rendering takes forever.
So I have 2 questions:
Is there a better way of finding which objects the camera can see?
If not, can I speed up my raycasting/intersection checks?
Thanks in advance!
Take a look at this example of GPU picking.
You can do something similar, especially easy since you have a finite and ordered set of spheres. The idea is that you'd use a shader to calculate (probably based on position) a flat color for each sphere, and render to an off-screen render target. You'd then parse the render target data for colors, and be able to map back to your spheres. Any colors that are visible are also visible spheres. Any leftover spheres are hidden. This method should produce results faster than raycasting.
WebGLRenderTarget lets you draw to a buffer without drawing to the canvas. You can then access the render target's image buffer pixel-by-pixel (really color-by-color in RGBA).
For the mapping, you'll parse that buffer and create a list of all the unique colors you see (all non-sphere objects should be some other flat color). Then you can loop through your points--and you should know what color each sphere should be by the same color calculation as the shader used. If a point's color is in your list of found colors, then that point is visible.
To optimize this idea, you can reduce the resolution of your render target. You may lose points only visible by slivers, but you can tweak your resolution to fit your needs. Also, if you have fewer than 256 points, you can use only red values, which reduces the number of checked values to 1 in every 4 (only check R of the RGBA pixel). If you go beyond 256, include checking green values, and so on.
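As an illustration, here is a minimal sketch of that off-screen pass. It assumes a pickingScene containing flat-colored copies of the points already exists, that point N is drawn with color value N + 1, and the exact render-target call syntax varies between three.js versions:
// render the flat-color picking scene into a small off-screen target
var pickingTarget = new THREE.WebGLRenderTarget(256, 256); // low resolution is usually enough
renderer.setRenderTarget(pickingTarget);
renderer.render(pickingScene, camera); // pickingScene: the same points drawn with flat, unique colors (assumed set up elsewhere)
renderer.setRenderTarget(null);
// read the pixels back and record every color that is actually visible
var buffer = new Uint8Array(256 * 256 * 4);
renderer.readRenderTargetPixels(pickingTarget, 0, 0, 256, 256, buffer);
var seenColors = {};
for (var i = 0; i < buffer.length; i += 4) {
    seenColors[(buffer[i] << 16) | (buffer[i + 1] << 8) | buffer[i + 2]] = true;
}
// a grid point is visible if its assigned color survived into the buffer
var visiblePoints = gridPoints.geometry.vertices.filter(function (v, index) {
    return seenColors[index + 1] === true; // assumes point N was drawn with color value N + 1
});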

THREE.CanvasRenderer rendering issue, circles leave a 'trace' even though setClearColor is set

This has been puzzling me all day. I have put together a very simple holding page for a website that can be viewed here. It has a very simple three.js animated scene as a background, based on one of the examples. When the user moves the mouse to the top of the window, the animation responds to the input. The issue is that there seems to be a rendering artefact (see the image below, or try it online via the link above).
It seems that there is an unwanted 'trace'-like effect that I just can't seem to prevent. The clear color is set with renderer.setClearColor(0xEFEFEF, 1). I have tried renderer.autoClear = true explicitly, but still the artefacts show. It seems to happen when a circle is at the very edge of the drawn area while moving vertically.
I have tried searching everywhere but failed to find an answer. Can anyone help?
UPDATE:
I looked at the link provided by WestLangley, where the context.arc scaling looked like a possible cause. It turned out that it was not: I was already using a value < 1, and I tried a value of 1, but the artefacts still show.
I believe that since the drawing area changes each frame (due to the sinusoidal nature of the wave animation and the changing viewing angle of the camera), the area that is cleared is incorrect. The area that is cleared is the drawing area of the current frame (about to be rendered), leaving an uncleared area from the previous frame (already rendered and visible) and possibly artefacts. This only occurs when the drawing area is shrinking and the camera is moving in the opposite direction to the reduction in drawing area. I believe that, for these artefacts not to show, the drawing area of the previous frame should be cleared before drawing, not the area of the current frame. I am still looking at this.
UPDATE 2:
I seem to have verified the drawing area clearance theory from update 1. By adding:
renderer.getContext().canvas.getContext("2d").clearRect(0, 0, renderer.getContext().canvas.width, renderer.getContext().canvas.height);
before the renderer.render(scene, camera) call, the entire canvas was cleared to white. Since the renderer.setClearColor(0xEFEFEF, 1) is set to slightly off-white, one can clearly see the drawing area.
I decided to add a plane with a transparent material to force the drawing area to be full size of the canvas. I am not 100% happy with this but it seems to be a workaround.
var geo = new THREE.PlaneBufferGeometry(1920 * 2, 1080 * 2, 1, 1);
var mat = new THREE.MeshBasicMaterial({ transparent: true, opacity: 0.01 }); // an almost invisible material
var plane = new THREE.Mesh(geo, mat);
scene.add(plane); // its bounds force the drawn/cleared area to span the whole canvas
The artefacts are banished.

External elements slowing down canvas

I am developing a game using several canvases (3) on top of one another. I am close to finishing the game and I haven't yet optimized the performance.
Regardless, my main concern is that the game has performed pretty well so far, but now that I am close to finishing I am building a simple web page around the canvas to give the game a frame. I am talking about just putting the title of the game and a few links here and there, but suddenly the game is choppy and slow! If I remove those elements, everything is smooth again.
The culprits are:
The game title above the canvas (styled with text-shadow).
Four buttons below the canvas that redirect to other sites and the credits.
Is it possible that these few static elements interfere with the rendering of the game?
Thank you.
Anything with shadows, rounded corners or expensive effects such as blur costs a lot to render.
Modern browsers try to optimize this in various ways, but there are special cases they can't get around just like that (updated render engines using 3D hardware may help in the future).
Shadows are closely related to blurring and need to be composited per frame, due to the possibility that the background, shadow color, blur range etc. could change. Rounded corners force the browser to create an alpha mask instead of doing just a rectangular clip. The browser may cache some of these operations, but they add up in the end.
Text Shadow
A workaround is to "cache" the shadowed text as an image. It can be a pre-made image from Photoshop or it could be made dynamically using a canvas element. Then display this instead of the text+shadow.
Example
var ctx = c.getContext("2d"),
txt = "SHADOW HEADER";
// we need to do this twice as when we set width of canvas, state is cleared
ctx.font = "bold 28px sans-serif";
c.width = ctx.measureText(txt).width + 20; // add space for shadow
c.height = 50; // estimated
// and again...
ctx.font = "bold 28px sans-serif";
ctx.textBaseline = "top";
ctx.textAlign = "left";
ctx.shadowBlur = 9;
ctx.shadowOffsetX = 9;
ctx.shadowOffsetY = 9;
ctx.shadowColor = "rgba(0,0,0,0.8)";
ctx.fillStyle = "#aaa";
ctx.fillText(txt, 0, 0);
The accompanying snippet CSS and HTML:
body {background:#7C3939}
<canvas id=c></canvas>
The canvas element can now be placed as needed. In addition you could convert the canvas to an image and use that without the extra overhead.
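For instance, a minimal sketch of that conversion, assuming c is the canvas from the example above:
// snapshot the rendered header and reuse it as an ordinary image
var img = new Image();
img.onload = function () {
    document.body.appendChild(img); // place it wherever the text heading used to go
};
img.src = c.toDataURL("image/png"); // c: the canvas drawn in the example above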
Rounded Corners
Rounded corners on an element are also expensive, and there is no easy way around this - the corners need to be cut one way or another, and the question is which method is fastest.
Let the browser do it using CSS.
Overlay the element with the outer corners covered in the same color as the background - clunky, but can be fast as no clipping is needed. However, more data needs to be composited.
Use a mask in the canvas directly via globalCompositeOperation (a sketch of this follows the list). Chances are this would be the slowest method; performance tests must be made for this scenario to find out which option works best overall.
Make a compromise and remove rounded corners altogether.
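A minimal sketch of the globalCompositeOperation approach mentioned above (the maskRoundedCorners helper, the ctx/canvas variables and the 20px radius are illustrative assumptions):
// after drawing the frame, keep only the pixels inside a rounded rectangle
function maskRoundedCorners(ctx, width, height, radius) {
    ctx.globalCompositeOperation = "destination-in"; // existing pixels survive only where the new shape is drawn
    ctx.beginPath();
    ctx.moveTo(radius, 0);
    ctx.arcTo(width, 0, width, height, radius);
    ctx.arcTo(width, height, 0, height, radius);
    ctx.arcTo(0, height, 0, 0, radius);
    ctx.arcTo(0, 0, width, 0, radius);
    ctx.closePath();
    ctx.fill();
    ctx.globalCompositeOperation = "source-over"; // restore the default mode
}
maskRoundedCorners(ctx, canvas.width, canvas.height, 20); // ctx/canvas and 20px are placeholders for your own values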
Links
These too could be replaced by clickable images. It's a bit more tedious, but they could also be made dynamically using a canvas, allowing the text to change ad hoc (see the sketch below).
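A rough sketch of that idea (the makeLinkCanvas helper, the label and the example URL are made up for illustration):
// render a link label to a small canvas once, then wrap it in a normal anchor
function makeLinkCanvas(text, href) {
    var cv = document.createElement("canvas");
    var lctx = cv.getContext("2d");
    lctx.font = "16px sans-serif";
    cv.width = Math.ceil(lctx.measureText(text).width) + 10;
    cv.height = 24;
    lctx.font = "16px sans-serif"; // resizing the canvas resets the context state
    lctx.fillStyle = "#ddd";
    lctx.fillText(text, 5, 18);
    var a = document.createElement("a");
    a.href = href;
    a.appendChild(cv);
    return a;
}
document.body.appendChild(makeLinkCanvas("Credits", "credits.html")); // label and URL are placeholders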
CSS
I would also recommend experimenting with position: fixed; for some of the elements. When fixed is used, some browsers render that element separately (giving it its own bitmap). This may be more efficient in some cases.
But do make some performance tests to see what combination is the best for your scenario.

getting sprites to work with three.js and different camera types

I've got a question about getting sprites to work with three.js using perspective and orthographic cameras.
I have a building being rendered in one scene. At one location in the scene, all of the levels are stacked on top of each other to give a 3D view of the building, and an orthographic camera is used to view it. In another part of the scene, just the selected level of the building is shown, and a perspective camera is used. The screen is divided between the two views. The idea is that the user selects a level from the building view, and a more detailed map of that selected level is shown on the other part of the screen.
I played around with sprites for a little bit, and as far as I understand it, if a sprite is viewed with a perspective camera then its scale property is effectively its size property, and if a sprite is viewed with an orthographic camera the scale property scales the sprite relative to the viewport.
I placed the sprite where both cameras can see it, and this seems to be the case. If I scale the sprite by 0.5, the sprite takes up half the orthographic camera's viewport and I can't see it with the perspective camera (presumably because, for it, the sprite is 0.5px x 0.5px and is either rounded to 0px, i.e. not rendered, or to 1px, effectively invisible). If I scale the sprite by, say, 50, the perspective camera can see it (presumably because it's a 50px x 50px square) and the orthographic camera's view is overtaken by the sprite (presumably because it's being scaled to 50 times the viewport).
Is my understanding correct?
I ask because in the scene I'm rendering, the building and detailed areas are ~1000 units apart on the x-axis. If I place a sprite somewhere on the detail map, I need it to be ~35x35 pixels, and when I do this it works fine for the detail view, but the building view is overtaken. I played with the numbers and it seems that if I scale the sprite by 4, it starts to show up in my building view, even though there's a 1000-unit distance between the views and the sprite isn't visible to the perspective camera.
So, if my understanding is correct, I need to either use separate scenes, have a much bigger gap between the views, use the same camera type for both views, or not use sprites.
There are basically two different ways you can use sprites: either with 2D screen coordinates or with 3D scene coordinates. Perhaps scene coordinates are what you need? For examples of both, check out the demo at:
http://stemkoski.github.io/Three.js/Sprites.html
and in particular, when you zoom in and zoom out in that demo, notice that the sprites in-scene will change size, while the others do not.
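To make the two usages concrete, here is a minimal sketch along the lines of that demo (the texture variable, sizes and positions are assumptions):
// 1. sprite in 3D scene coordinates: its scale is in world units, so it shrinks/grows with the camera
var sceneSprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture })); // 'texture' is assumed to be a loaded THREE.Texture
sceneSprite.position.set(0, 50, 0);
sceneSprite.scale.set(35, 35, 1); // roughly 35 world units across
scene.add(sceneSprite);
// 2. sprite in 2D screen coordinates: keep it in its own scene with an orthographic "HUD" camera
var hudScene = new THREE.Scene();
var hudCamera = new THREE.OrthographicCamera(-window.innerWidth / 2, window.innerWidth / 2,
    window.innerHeight / 2, -window.innerHeight / 2, 1, 20);
hudCamera.position.z = 10;
var hudSprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
hudSprite.scale.set(35, 35, 1); // 35 x 35 pixels regardless of zoom
hudScene.add(hudSprite);
// render loop: draw the world first, then the HUD sprites on top
renderer.autoClear = false;
renderer.clear();
renderer.render(scene, camera);
renderer.render(hudScene, hudCamera);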
Hope this helps!

Drawing 3D in front/behind sprites in XNA/WP7?

This is kind of frustrating me as I've been grizzling over it for a couple of hours now.
Basically I'm drawing 2D sprites through spritebatch and 3D orthographically projected geometry using the BasicEffect class.
My problem is controlling what gets rendered on top of what. At first I thought it would be simply controlling the render order, i.e. if I do:
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
It would mean the 2D stuff renders over the 3D stuff; however, since I don't control when the device begins/ends rendering, this isn't the result. The 3D always renders on top of the 2D elements, no matter the projection settings, world translation, the z components of the 3D geometry's vertex definitions, or the layer depth of the 2D elements.
Is there something I'm not looking into here?
What's the correct way to handle the depth here?
OK, I figured it out 2 seconds after posting this question. I don't know if it was a coincidence or if Stack Overflow has a new feature granting the ability to see future answers.
The Z positions of SpriteBatch elements are between 0 and 1, so they're not directly comparable to the z positions of the orthographic geometry being rendered.
When you create an orthographic matrix, however, you define near and far clip planes, and the Z positions you set should be within those planes. I had a hunch that the SpriteBatch class is effectively drawing quads orthographically, so by extension 0 to 1 would mean 0 represents the near clip and 1 the far clip, and the sprite depth is probably written to the same place the 3D geometry's depth is written to.
So, to make it work, I figured that the near/far clips I define for the orthographic render will be measured against the 0-to-1 range of the sprites being rendered, so it was simply a matter of setting the right z value. For example:
If I have a near clip of 0 and a far clip of 10000, and I want to draw something so that it corresponds to a 0.5f layer depth (rendering in front of sprites drawn at 0.6 and behind sprites drawn at 0.4), I do:
float zpos = 0.5f;
float orthoGraphicZPos = LinearInterpolate(0, 10000, zpos);
Or just zpos * 10000 :D
I guess it would make more sense to set your orthographic renderer's near/far clips to 0 and 1, so they compare directly with the sprites' layer depths.
Hopefully my reasoning for this solution was correct (more or less).
As an aside, since you mentioned you had a hunch about how the sprite batch draws quads: you can see the source code for all the default/included shaders and the SpriteBatch class if you are curious, or if you need help solving a problem like this:
http://create.msdn.com/en-US/education/catalog/sample/stock_effects
The problem is that SpriteBatch messes with some of the render states that are used when you draw your 3D objects. To fix this, you just have to reset them before rendering your 3D objects, like so:
GraphicsDevice.BlendState = BlendState.Opaque;
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
GraphicsDevice.SamplerStates[0] = SamplerState.LinearWrap;
Draw3DStuff()
SpriteBatch.Begin(...)
Draw2DStuff();
SpriteBatch.End();
Note that this is for XNA 4.0, which I am pretty sure you're using anyway. More info can be found on Shawn Hargreaves' blog here. This will reset the render states, draw the 3D objects, then draw the 2D objects over them. Without resetting the render states you get weird effects like the ones you're seeing.
