Snapping with CALayers

I am moving a few CALayers around with the -mouseDragged: method, and now I would like to "snap" them into place when they are sufficiently near each other (or when they overlap just a little).
Each layer is not a square: I am drawing different polygons.
I thought that a way to do this is to:
1. get the position of the layer being moved;
2. get the layers that overlap or are near the layer being moved;
3. for each side of a polygon for which (2) is true, check the distance (measured at right angles to the side) between the side of the moving layer and the nearby layer's side (this value is negative when the layers overlap);
4. move the layer accordingly.
I don't know if this is a correct approach. The first thing that comes to mind is:
What happens if I can "snap" on more than one side?
And even if I try this approach, I have no idea what to do for steps (2) and (3).
Is there a better way to do that?

This is not easy. Because CALayers are not vector graphics, you have to deal with any possible shape (e.g. the picture of a dragon).
Proper collision detection is difficult. Instead, try hit-testing the mouse/touch position against the shape you want to snap to.
You can do this by checking the transparency of each of the candidate snapping layers at the mouse location. See this question for information on how to do this.
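The CALayer calls themselves are Objective-C, but the heart of the test is language-agnostic: render the layer into an RGBA bitmap (for example with renderInContext: on a bitmap context) and inspect the alpha of the pixel under the cursor. A minimal sketch in plain C++, where the buffer layout and the threshold parameter are illustrative assumptions:

#include <cstdint>

// 'pixels' is assumed to be a width-by-height RGBA8 buffer that the layer
// was rendered into (e.g. via CALayer's renderInContext: on a bitmap context).
bool isOpaqueAt(const std::uint8_t* pixels, int width, int x, int y,
                std::uint8_t threshold = 0)
{
    const std::uint8_t alpha = pixels[(y * width + x) * 4 + 3];
    return alpha > threshold; // fully transparent pixels count as a miss
}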
More Difficult but Better Results:
Use a 2D physics engine like Chipmunk or Box2D to do your collision detection.
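If you go the physics-engine route, the setup in Box2D (C++) looks roughly like the sketch below; the shape, the sizes, and what you do with the contact information are all placeholders for your own polygons and snapping logic:

#include <box2d/box2d.h> // older Box2D versions use <Box2D/Box2D.h>

int main()
{
    b2World world(b2Vec2(0.0f, 0.0f)); // no gravity; we only want contacts

    b2BodyDef def;
    def.type = b2_dynamicBody;
    b2Body* body = world.CreateBody(&def);

    b2PolygonShape shape;
    shape.SetAsBox(0.5f, 0.5f); // stand-in for one of your polygons
    body->CreateFixture(&shape, 1.0f);

    world.Step(1.0f / 60.0f, 8, 3); // advance the simulation one frame

    // Walk the contact list to see what is touching, and snap accordingly.
    for (b2Contact* c = world.GetContactList(); c; c = c->GetNext())
        if (c->IsTouching()) { /* snap the dragged layer here */ }
    return 0;
}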

Related

Direct2D: Check if image is outside visible area before drawing?

Is it a reasonable optimization to omit calls to ID2D1HwndRenderTarget::DrawBitmap() if the image will end up outside the visible area? If I implement the checking logic in the application, that will cost some performance; but if the first thing D2D does is the same check, I'd rather not do it.
I ran a test with my application, which renders part of its UI using Direct2D (with RenderDoc attached), and the behavior seems a bit random.
I render a mix of rectangles, text, path geometries (Béziers), and a rectangle with a bitmap brush (which should be equivalent to your DrawBitmap call).
Then I captured one frame with all those objects visible, and another with my UI panned (using a transform) so the objects are not visible.
From there I could check what was drawn and what was not:
Text is always culled.
Solid color rectangles are not culled.
Path geometries are culled most of the time, but sometimes not.
Rectangles with a bitmap brush are NEVER culled.
So it seems Direct2D makes different decisions depending on the type of element you plan to draw.
Since rectangles are easily batched and cheap to draw, it seems they are simply drawn regardless.
Bitmap rectangles and text require more work, so it seems those are effectively culled.
Path geometry culling appears to depend on how many primitives the geometry is tessellated into (I had a path that translated to 26 primitives and was not culled, and another that translated to 120 and was culled).
So you can either trust Direct2D to perform that optimization, or, as I would personally do, implement a quick rectangle-to-rectangle check just in case (it won't hurt your performance, as it's an extremely simple operation).
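A sketch of that check in C++, where destRect and viewRect are assumptions: the bitmap's destination rectangle and the render target's bounds, expressed in the same coordinate space:

// Cheap axis-aligned overlap test between two Direct2D rectangles.
bool Intersects(const D2D1_RECT_F& a, const D2D1_RECT_F& b)
{
    return a.left < b.right && a.right > b.left &&
           a.top < b.bottom && a.bottom > b.top;
}

// Only issue the draw call when the bitmap can actually appear on screen.
if (Intersects(destRect, viewRect))
    renderTarget->DrawBitmap(bitmap, &destRect);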

Unity 4.6 Canvas - How to correctly apply 2D physics effects

Is there a way to universally scale Physics2D calculations on the canvas?
I'm trying to make a set of canvas UI elements with 2D physic properties. The objects contain images and text, but need to respond to gravity, impacts, and overlapping collision boxes with other GUI elements.
I've added Rigidbody2D and BoxCollider2D components to my objects. However, they move very slowly. Under gravity, they fall slowly. If overlapped, they push each other apart slowly.
I've figured out that this is due to the canvas having a very large scale. My objects are effectively 'very big and very far away'.
I can't modify the canvas scale. It needs to be huge or I get render artifacts.
I can't just modify gravity because it doesn't provide a universal fix: things fall faster, but they don't push apart or spring correctly.
I can't modify the timestep because it affects the whole world, not just the canvas.
My canvas objects have widths akin to 80, where Unity physics expects widths akin to 1. How can I get them to behave as if they had a width of 1?
Is there some universal scaling factor for canvas based physics, or am I simply mis-using the canvas for something it is not intended for?
If you are still having this problem: the fix for the movement rates of your scaled-up objects is to scale up your movement forces, as well as your gravity, by the same factor. If you can't get certain elements to work right, apply a push force yourself, which you can set to any strength.
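As a rough worked example (the factor of 80 comes from the widths described in the question): if your canvas objects are about 80 units wide where the physics engine expects about 1, accelerations need to be scaled by that same factor of 80 for the motion to look right. So the default gravity of -9.81 becomes roughly -9.81 x 80 = -785, and any forces you apply need the same multiplier.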

Do elements drawn outside the clip plane affect OpenGL performance?

I have a question about the clip space transformation. I am reading an online tutorial which says that everything you draw outside the clip space will be clipped. Do elements outside the clip space affect performance, or do they cost nothing because they are never drawn?
Assuming they do affect performance: in the case of a 2D game like Super Mario, I am thinking of not drawing the elements outside the clip space to achieve better performance. Please clarify. Thanks.
OpenGL has only limited knowledge of your scene and clips very late in the pipeline, so it can't apply a broad-phase test. Assuming you can, you should.
Suppose you had a model with 30,000 triangles: OpenGL would transform each and every one of them before considering clipping. If you know something as simple as the model's bounding sphere, you might see that the whole thing is completely outside the frustum in a single test and save almost 30,000 triangles' worth of effort, as in the sketch below.
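A minimal sketch of that broad-phase test in C++, assuming the six frustum planes have already been extracted from your matrices with normals pointing into the frustum:

struct Plane { float a, b, c, d; }; // plane equation: a*x + b*y + c*z + d = 0

// True when the bounding sphere lies entirely behind one frustum plane,
// i.e. the whole model can be skipped before any per-triangle work.
bool sphereOutsideFrustum(const Plane planes[6],
                          float cx, float cy, float cz, float radius)
{
    for (int i = 0; i < 6; ++i) {
        float dist = planes[i].a * cx + planes[i].b * cy
                   + planes[i].c * cz + planes[i].d;
        if (dist < -radius)
            return true; // completely outside this plane: cull the model
    }
    return false; // possibly visible: submit it to OpenGL as usual
}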
In a 2D game like Mario this usually means using the scroll position to index into the map, and generating geometry only for the potentially visible tiles and sprites within the visible area.
For the map, that generally just means figuring out the (x, y) of one corner and then generating geometry for the known width and height of the screen, which discards the vast majority of the geometry with zero processing, as sketched below.
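A sketch of that corner-and-range computation in C++ (the 16-pixel tile size is an assumption):

const int TILE = 16; // assumed tile size in pixels

// From the scroll position, compute the inclusive range of map tiles that
// can appear on screen; everything else is never turned into geometry.
void visibleTileRange(int scrollX, int scrollY, int screenW, int screenH,
                      int& firstCol, int& firstRow, int& lastCol, int& lastRow)
{
    firstCol = scrollX / TILE;
    firstRow = scrollY / TILE;
    lastCol = (scrollX + screenW - 1) / TILE;
    lastRow = (scrollY + screenH - 1) / TILE;
}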
For the sprites, this is generally why in those sorts of games you often see enemies reset to their starting position if you walk a little way from them and then walk back: they're added to the active list based on a map-location trigger and removed when you walk far enough away. While not active, no mutable storage is afforded to them.

three.js - Overlapping layers flickering

When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size and it works as intended. I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40,50,50) the contacting layers are flickering and the render depth doesn't work anymore.
How to fix that issue?
When .renderDepth doesn't work, you have to set the depths yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true; // enable a depth offset for this material
material.polygonOffsetFactor = -0.1; // negative values pull the surface toward the camera
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to display and positive factors to hide.
For starters, try reducing the far plane of your camera; try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a VERY specific way (look up the term 'decal textures'/'decals'). So basically you have to create depth offsets, and perhaps even pre-sort the objects when doing this, which all requires pretty low-level tinkering.
If reducing the far range helps, then you're experiencing a lack of depth precision (which depends on the device). Also look up 'z-fighting'.
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example, if you have a camera at 0,0,0 looking at an object at 0,0,10, and you want another object to be behind the first one, put it at 0,0,11; it should work.
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
"...have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer."
"The renderer cannot reposition anything."
I think that this is completely untrue. The renderer can reposition everything, and probably does if it's not shadertoy, or some video filter or something. Every time you move your camera the renderer repositions everything (the camera is actually the only thing that DOES NOT MOVE).
It seems that you are missing some crucial concepts here, i'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here is how the depth offset would work. Say you want to draw a decal on a surface: you can 'draw' another mesh on that surface, for example by projecting a quad onto it. You want to draw a bullet hole over a concrete wall, so you end up with two coplanar surfaces: the wall and the bullet hole. You can figure out the depth-buffer precision, find the smallest representable difference, and then move the bullet-hole mesh by that amount towards the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube in which you move planes back and forth by the smallest possible increment), but it does translate in the depth direction, ending up in front of the wall.
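In raw OpenGL the same idea is exposed as polygon offset, which is what the three.js material properties above map to; a sketch (drawDecal is a hypothetical draw call for the decal mesh):

glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(-1.0f, -1.0f); // negative values pull the depth toward the viewer
drawDecal(); // the decal mesh, coplanar with the wall
glDisable(GL_POLYGON_OFFSET_FILL);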
I don't see any flicker; the cube movement in 3D seems super-smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.

Drawing with transparency in OpenGL ES 2

I'm working on a simple drawing application. My line is constructed using polygons and things look good so far. I would like to add a transparency feature, and I used glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); for that reason. However, because my polygons sometimes overlap, I get the ugly result shown in the picture (multiple layers of transparency). What I would like to get is the figure on the left (no overlapping because there is no transparency), but with an overall transparency.
I guess I could do this by keeping the polygons from overlapping, but I think that would be overkill for this task. There should be a way to control them at drawing time, but being a beginner with OpenGL doesn't help.
I'm sorry, but the way transparency works does not allow you to do what you want without manually keeping the polygons from overlapping. Transparency takes the colour of the surface below and uses the blending function you specify to calculate the final colour of the pixel.
In your program you are drawing multiple polygons with alpha on top of each other, so each overlapping fragment blends again with what is already there, giving the result you see.
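For reference, the blend function from the question combines each incoming fragment as out = src.rgb * src.a + dst.rgb * (1 - src.a). Drawing the same 50%-alpha colour twice over a background therefore leaves only 0.5 * 0.5 = 25% of the background showing through, which is exactly the doubled-up transparency visible in the picture.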
I've never actually written a drawing application, but you could take a look at triangle strips for drawing your lines. They let you extend the line point by point while making sure the geometry won't overlap with itself; a sketch follows.
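A sketch of that construction in plain C++ (the width parameter and the Vec2 type are assumptions). Note that very sharp corners can still make a strip fold over itself; miter joins are the usual refinement:

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Turn a polyline into a triangle strip of width w: two vertices per point,
// offset to either side along the normal of the neighbouring segment.
std::vector<Vec2> buildStrip(const std::vector<Vec2>& pts, float w)
{
    std::vector<Vec2> strip;
    for (size_t i = 0; i < pts.size(); ++i) {
        size_t a = (i == 0) ? 0 : i - 1;
        size_t b = (i + 1 < pts.size()) ? i + 1 : i;
        float dx = pts[b].x - pts[a].x;
        float dy = pts[b].y - pts[a].y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len == 0.0f) len = 1.0f; // guard against repeated points
        float nx = -dy / len * (w * 0.5f);
        float ny = dx / len * (w * 0.5f);
        strip.push_back({pts[i].x + nx, pts[i].y + ny});
        strip.push_back({pts[i].x - nx, pts[i].y - ny});
    }
    return strip; // draw with GL_TRIANGLE_STRIP and a single alpha value
}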
