Scaling drawing done in drawRect when view resizes - Cocoa

I'm still learning some of the ins and outs of custom view drawing in Cocoa.
I have a custom view where I draw lines and points based on corresponding points in a larger, fixed-size rect elsewhere.
I would like the drawing to scale up or down when the view is resized, but maintain the same aspect ratio as the larger rect.
What is the best way to scale the drawing?
Do I need to somehow apply an affine transform?
Or should I be drawing to an imageRef?
I'm not really sure exactly how to do either one in this case, or how to keep it in sync with the size of the view and the aspect ratio of the larger rect the coordinates come from.
Any tips or links to example code are greatly appreciated.

Concatenating an affine transform sounds like the right solution. Scaling by the same factor in both dimensions will preserve the aspect ratio of your drawing, and you can use simple division to compute the right factor (assuming you aren't just getting it from a slider or something).
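For concreteness, here's a minimal Swift sketch of that approach, assuming a hypothetical fixed-size `sourceRect` that your coordinates come from; using the smaller of the two width/height ratios keeps the whole source rect visible while preserving its aspect ratio:

```swift
import Cocoa

class ScalingView: NSView {
    // Assumed stand-in for the larger, fixed-size rect the coordinates come from.
    let sourceRect = NSRect(x: 0, y: 0, width: 1000, height: 800)

    override func draw(_ dirtyRect: NSRect) {
        super.draw(dirtyRect)

        // The same factor in both dimensions preserves the aspect ratio;
        // min() ensures the whole source rect fits in the view.
        let scale = min(bounds.width / sourceRect.width,
                        bounds.height / sourceRect.height)

        let transform = NSAffineTransform()
        transform.scale(by: scale)
        transform.concat() // concatenated onto the current graphics context

        // Everything below is drawn in source-rect coordinates.
        let path = NSBezierPath()
        path.move(to: NSPoint(x: 100, y: 100))
        path.line(to: NSPoint(x: 900, y: 700))
        NSColor.systemBlue.setStroke()
        path.stroke()
    }
}
```

If you also want the drawing centered rather than pinned to the origin, you can additionally translate by half of the leftover space in whichever dimension has slack.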
If you haven't already, I highly recommend reading the Cocoa Drawing Guide and Quartz 2D Programming Guide. There's a lot of overlap, but the explanations are not copy-and-pasted, so if one guide's explanation of something doesn't make sense, look it up in the other one and try reading that version.

Related

Unity 4.6 Canvas - How to correctly apply 2D physics effects

Is there a way to universally multiply Physics2D calculations on the canvas?
I'm trying to make a set of canvas UI elements with 2D physics properties. The objects contain images and text, but they need to respond to gravity, impacts, and overlapping collision boxes with other GUI elements.
I've added Rigidbody2D and BoxCollider2D components to my objects. However, they move very slowly. If given gravity, they fall slowly. If overlapped, they push each other apart slowly.
I've figured out that this is due to the canvas having a very large scale. My objects are effectively 'very big and very far away'.
I can't modify the canvas scale. It needs to be huge or I get render artifacts.
I can't just modify gravity because it doesn't provide a universal fix. Things fall faster, but they don't push apart or spring right.
I can't modify the timestep because it affects the whole world, not just the canvas.
My canvas objects have widths akin to 80, where Unity physics expects widths akin to 1. How can I get them to behave as if they have a width of 1?
Is there some universal scaling factor for canvas-based physics, or am I simply misusing the canvas for something it is not intended for?
If you are still having this problem, the answer to fixing the movement rates of your scaled-up objects is to scale up your movement forces as well as your gravity. If you can't get certain elements to work right, apply the push force yourself so you can set it to any strength.
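If it helps to see the arithmetic (Unity scripts are C#, so this is only a language-neutral sketch of the ratio, not Unity code): when your canvas objects are about 80 units wide where the physics engine expects about 1, scale gravity and every applied force by that same ratio:

```swift
// Assumed ratio from the question: canvas widths around 80 vs. expected widths around 1.
let scaleRatio = 80.0

// Gravity scaled by the ratio (the value you would assign to Physics2D.gravity.y):
let defaultGravity = -9.81
let scaledGravity = defaultGravity * scaleRatio   // about -784.8

// Every applied force scaled the same way (the magnitude you would pass to AddForce):
let intendedForce = 10.0
let scaledForce = intendedForce * scaleRatio      // 800.0
```

The idea is that acceleration measured in "object widths per second squared" stays the same once the forces grow in proportion to the object size.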

How to limit an image to its real shape

How can I make collision detection in XNA happen only for the area of the shape in an image, not the area around it?
For example, when I use the picture below, I want collision detection to happen only when the arrow shape itself is touched.
Currently, the collision detection happens in the whole area shown in this picture.
How can I limit it to the area of the shape only?
One thing you can do is create two rectangles instead of one. That makes the false-positive area (the area covered by the rectangle but not by the image) a bit smaller. But if you need to do this pixel exact, you have to use resource-expensive per-pixel collision.
You shouldn't try restricting the image shape, because regardless of your efforts you will still have a rectangle. What you need to do is work with detecting pixel collisions. It is a fairly extensive topic; you can read more about a Windows Phone-specific XNA implementation here.
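To sketch what per-pixel collision involves (XNA itself is C#; this is a framework-neutral outline with hypothetical names): first intersect the two bounding rectangles as a cheap rejection test, then walk the overlapping region and report a hit only where both sprites have a non-transparent pixel:

```swift
// Hypothetical sprite type: position, size, and a row-major array of alpha
// values (in XNA you would extract these with Texture2D.GetData).
struct Sprite {
    var x: Int, y: Int            // top-left position on screen
    var width: Int, height: Int
    var alpha: [UInt8]            // width * height alpha values
}

func pixelsCollide(_ a: Sprite, _ b: Sprite) -> Bool {
    // Cheap rejection test: intersect the bounding rectangles first.
    let left   = max(a.x, b.x)
    let right  = min(a.x + a.width, b.x + b.width)
    let top    = max(a.y, b.y)
    let bottom = min(a.y + a.height, b.y + b.height)
    guard left < right, top < bottom else { return false }

    // Walk the overlap: a collision is two non-transparent pixels
    // at the same screen position.
    for y in top..<bottom {
        for x in left..<right {
            let alphaA = a.alpha[(y - a.y) * a.width + (x - a.x)]
            let alphaB = b.alpha[(y - b.y) * b.width + (x - b.x)]
            if alphaA > 0 && alphaB > 0 { return true }
        }
    }
    return false
}
```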

Google Maps-style quad-tree of materials on a single plane in Three.js – 1x1, 2x2, 4x4 and 8x8

I'm trying and failing to work out how to achieve a quad-tree of materials (images) on a single plane, much like a Google Maps-style zoomable tile that gets more accurate the closer you get.
In short, I want to be able to have a 1x1 image texture (covering a plane that is 256 units wide and tall) that can then be replaced with a 2x2 texture, that can then be replaced with a 4x4 texture, and so on.
Like the image example below…
Ideally, I want to avoid having to create a different plane for each zoom level / number of segments. A perfect solution would allow me to break a single plane into 8x8 segments (highest zoom) and update the number of textures on the fly. So it would start with a 1x1 texture across all 64 (8x8) segments, then change into a 2x2 texture with each texture covering 4x4 segments, and so on.
Unfortunately, I can't work out how to do this. I explored setting the materialIndex for each face but you aren't able to update those after the first render so that wouldn't work. I've tried looking into UV coordinates but I don't understand how it would work in this situation, nor how to actually implement that in Three.js – there is little in the way of documentation / examples for this specific case.
A vertex shader is another option that came up in research, but again I don't know enough to understand how to construct that.
I'd appreciate any and all help with this; I'm sure it will be a technique that proves valuable for other Three.js users.
Not 100% sure what you are trying to do, whether you are talking about texture atlasing (looking up different textures based on the current settings/zoom), but if you are looking for quad-tree based texturing that increases in detail as you zoom in, then this is essentially what mipmapping is and does.
(It can also be used to do all sorts of weird things because of that, but that's another adventure entirely.)
Generally mipmapping is automatic based on the filtering you use; however, it sounds like you need more control over it.
I created an example hidden away in the three.js source tree which may help:
http://mrdoob.github.com/three.js/examples/webgl_materials_texture_manualmipmap.html
which shows how to load each mipmap level in manually, rather than having it be generated automatically.
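If you do end up going the UV route instead, the bookkeeping is plain quad-tree arithmetic. A framework-neutral sketch (three.js itself is JavaScript; the names here are hypothetical): at zoom level z there are 2^z tiles per side, and tile (i, j) covers a fixed sub-rectangle of the plane's UV space:

```swift
// Which sub-rectangle of UV space ([0, 1] x [0, 1]) a quad-tree tile covers.
struct UVRect {
    var u: Double      // left edge
    var v: Double      // bottom edge
    var size: Double   // tile width == height in UV space
}

func tileUVRect(zoom: Int, i: Int, j: Int) -> UVRect {
    // 1, 2, 4, 8 tiles per side for zoom 0...3, so each tile spans 1 / 2^zoom.
    let size = 1.0 / Double(1 << zoom)
    return UVRect(u: Double(i) * size, v: Double(j) * size, size: size)
}

// At zoom 3 (the 8x8 level), tile (2, 5) covers u 0.25...0.375, v 0.625...0.75.
let rect = tileUVRect(zoom: 3, i: 2, j: 5)
```

Each segment's face UVs would then be remapped into the rect of whichever tile currently covers it.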
HTH

2D Drawing on Transparent NSWindow

I have a transparent window and want to do 2D drawing in it. I'm considering two options:
Quartz 2D
OpenGL
As I have no experience with Quartz 2D at all, I'm wondering: would it give me better performance? My scene is made out of lines, circles and squares.
It depends. If your scene is dynamic, I would use OpenGL, which will have better performance. Quartz 2D could be much easier in terms of code to write, but if you need to refresh your window many times, that would cost you.
Another option would be to use both through CALayer. These layers in fact use OpenGL under the hood for faster rendering, so you can draw into them using Quartz 2D (e.g. via CAShapeLayer) and then manipulate the layer to change your scene dynamically. Please bear in mind that if you upscale your layer you'll get artifacts, so this technique effectively gives you a maximum layer size.
I hope I've been clear enough and helpful.
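A minimal Swift sketch of that CALayer approach, assuming a hypothetical layer-backed host view: the shapes are built once into a CAShapeLayer, and later scene changes become cheap layer-property updates instead of full redraws:

```swift
import Cocoa
import QuartzCore

// Hypothetical host view; any layer-backed NSView will do.
let view = NSView(frame: NSRect(x: 0, y: 0, width: 400, height: 400))
view.wantsLayer = true

// Build the scene (lines, circles, squares) once as a path.
let path = CGMutablePath()
path.move(to: CGPoint(x: 20, y: 20))
path.addLine(to: CGPoint(x: 180, y: 180))
path.addEllipse(in: CGRect(x: 200, y: 40, width: 100, height: 100))
path.addRect(CGRect(x: 60, y: 240, width: 120, height: 120))

let shape = CAShapeLayer()
shape.path = path
shape.strokeColor = NSColor.white.cgColor
shape.fillColor = nil
view.layer?.addSublayer(shape)

// Later, dynamic changes are layer-property updates, not redraws.
// (As noted above, large upscales will show artifacts.)
shape.transform = CATransform3DMakeScale(1.5, 1.5, 1.0)
```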

How do you mask an arbitrary area of an image to overlay another image?

I want to mask an arbitrary convex polygon area of an image and put another image into that area. I found this posting, but it wasn't clear to me whether it applies only to rectangular areas or also to arbitrary polygons.
The basic flow I am talking about is to have an (x, y) coordinate on the screen which would serve as the center of my polygon (center in terms of an arbitrary point which is consistent for me). I would like to mask this area where the new image (polygonal in nature) would be displayed, while leaving the rest of the screen as is.
Can I do this easily and quickly?
You have to use the stencil buffer. It's basically another type of buffer that has a plethora of awesome applications, and one of the simplest is masking. While I can't recommend any OpenGL ES-specific tutorial off the top of my head, I highly recommend reading general tutorials, since it's not that different and it surely is fascinating.
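A rough sketch of the two-pass stencil sequence in Swift (the calls are the standard OpenGL ES C API; drawPolygon() and drawImage() are hypothetical stand-ins for your own rendering, and the context must have been created with a stencil buffer):

```swift
import OpenGLES

// Hypothetical stand-ins for your own rendering code.
func drawPolygon() { /* render the convex mask polygon */ }
func drawImage()   { /* render the overlay image */ }

func drawMaskedImage() {
    glEnable(GLenum(GL_STENCIL_TEST))

    // Pass 1: write 1s into the stencil buffer where the polygon is,
    // without touching the color buffer.
    glStencilFunc(GLenum(GL_ALWAYS), 1, 0xFF)
    glStencilOp(GLenum(GL_KEEP), GLenum(GL_KEEP), GLenum(GL_REPLACE))
    glColorMask(GLboolean(GL_FALSE), GLboolean(GL_FALSE),
                GLboolean(GL_FALSE), GLboolean(GL_FALSE))
    drawPolygon()

    // Pass 2: draw the image, but only where the stencil value equals 1.
    glStencilFunc(GLenum(GL_EQUAL), 1, 0xFF)
    glStencilOp(GLenum(GL_KEEP), GLenum(GL_KEEP), GLenum(GL_KEEP))
    glColorMask(GLboolean(GL_TRUE), GLboolean(GL_TRUE),
                GLboolean(GL_TRUE), GLboolean(GL_TRUE))
    drawImage()

    glDisable(GLenum(GL_STENCIL_TEST))
}
```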
Try glScissor... it might be the rectangle you want.
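Worth noting: glScissor clips to a single axis-aligned rectangle in window coordinates, so it can't express an arbitrary convex polygon, but when a rectangle happens to be enough it's just (drawImage() again being a hypothetical stand-in):

```swift
glEnable(GLenum(GL_SCISSOR_TEST))
glScissor(100, 100, 256, 256)   // x, y, width, height in window coordinates
drawImage()                     // only pixels inside the rectangle are written
glDisable(GLenum(GL_SCISSOR_TEST))
```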
