I am drawing a rounded image in the centre of my custom view by creating a rounded NSBezierPath and adding it to the clipping region of the graphics context. This is working well, but I'm wondering if I can improve the performance of drawing the background of the view (the area outside the rounded centred image) by inverting this clip and performing the background draw (an NSGradient fill).
Can anyone suggest a method of inverting a clip path?
You won't improve the drawing performance of the background by doing this. If anything, using a more complex clipping path will slow down drawing.
If you want to draw objects over a background without significant overhead of redrawing the background, you could use Core Animation layers as I explained in my answer to this question.
Regardless of performance, this question is the first result that comes up when you try to invert a clip.
So if that's what you're looking for:
Using NSBezierPath addClip - How to invert clip
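For the record, the standard trick is to append the view's bounds rectangle to the rounded path and clip using the even-odd winding rule, which restricts drawing to everything outside the rounded shape. A minimal sketch (the rect and radius parameters are placeholders):

```swift
import AppKit

// Sketch: "invert" a clip by combining the full bounds with the rounded
// path under the even-odd winding rule, so the clip region becomes the
// area *outside* the rounded rect.
func clipOutside(roundedRect: NSRect, cornerRadius: CGFloat, bounds: NSRect) {
    let clip = NSBezierPath(rect: bounds)                 // outer: the whole view
    clip.append(NSBezierPath(roundedRect: roundedRect,
                             xRadius: cornerRadius,
                             yRadius: cornerRadius))      // inner: the rounded image path
    clip.windingRule = .evenOdd                           // even-odd keeps the area between them
    clip.addClip()                                        // subsequent fills hit only the background
}
```

After calling this in draw(_:), an NSGradient fill over the bounds paints only the background and skips the rounded centre.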
I am using pixel colour inspection to detect collisions. I know there are other ways to achieve this but this is my use case.
I draw a shape cloned from the main canvas onto a second canvas, switching the fill and stroke colours to pure black. I then use getImageData() to get an array of pixel colours and inspect them - if I see black, I have a collision with something.
However, some pixels are shades of grey because the second canvas is applying antialiasing to the shape. I want only black or transparent pixels.
How can I get the second canvas to be composed of either transparent or black only?
I achieved this long ago with Windows GDI via compositing/XOR combinations, etc. However, GDI did not always apply antialiasing. I guess the answer lies in globalCompositeOperation or filter, but I cannot see which settings/filters or sequence to apply.
I appreciate that I have not provided sample code, but I am hoping someone can throw me a bone and I'll work up a snippet here, which might become a standard cut & paste for posterity.
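One common workaround, since path antialiasing can't simply be switched off for canvas drawing, is to threshold the pixels read back from getImageData() so every pixel snaps to either pure black or fully transparent. The logic is sketched here in Swift (the buffer layout assumed matches getImageData().data: flat RGBA bytes); porting it to a JavaScript loop is mechanical:

```swift
import Foundation

// Sketch: snap every RGBA pixel to either pure opaque black or fully
// transparent, removing the grey antialiased fringe pixels.
// alphaThreshold decides which side of the fringe counts as "shape".
func snapToBlackOrTransparent(_ rgba: inout [UInt8], alphaThreshold: UInt8 = 128) {
    for i in stride(from: 0, to: rgba.count, by: 4) {
        if rgba[i + 3] >= alphaThreshold {
            // Mostly-covered pixel: force it to opaque black.
            rgba[i] = 0; rgba[i + 1] = 0; rgba[i + 2] = 0; rgba[i + 3] = 255
        } else {
            // Faint antialiased edge: discard it entirely.
            rgba[i + 3] = 0
        }
    }
}
```

After this pass, the inspection loop only ever sees black or transparent, so the grey-shade ambiguity disappears.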
I have an image that is generated as an NSImage, and then I draw it into my NSView subclass, using a custom draw() method.
I want to modify this custom view so that the image is drawn in the same place, but it fades out on the side. That is, it's drawn as a linear gradient, from alpha=1.0 to alpha=0.0.
My best guess is one of the draw() variants with NSCompositingOperation might help me do what I want, but I'm having trouble understanding how they could do this. I'm not a graphics expert, and the NSImage and NSCompositingOperation docs seem to be using different terminology.
The quick version: pretty much this question but on macOS instead of Android.
You're on the right track. You'll want to use NSCompositingOperationDestinationOut to achieve this. It effectively punches out the destination based on the alpha of the source being drawn. So if you first draw your image and then draw a gradient from alpha 0.0 to 1.0 on top with the .destinationOut operation, you'll end up with your image at alpha 1.0 to 0.0.
Because that punch out happens to whatever is already in the backing store where the gradient is being drawn to, you'll want to be careful where/how you use it.
If you want to do this all within the drawing of the view, you should do the drawing of your image and the punch out gradient within a transparency layer using CGContextBeginTransparencyLayer and CGContextEndTransparencyLayer to prevent punching out anything else.
You could also first create a new NSImage to represent this faded variant, either using the drawingHandler constructor or NSCustomImageRep and doing the same image & gradient drawing within there (without worrying about transparency layers). And then draw that image into your view, or simply use that image with an NSImageView.
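A minimal sketch of that second suggestion (the drawingHandler route), assuming a left-to-right fade; the function name and the exact gradient direction are illustrative:

```swift
import AppKit

// Sketch: build a faded copy of an NSImage by drawing it, then punching
// out its alpha with a .destinationOut gradient. Because this happens
// inside the image's own drawing handler, nothing else in the view's
// backing store is affected (no transparency layer needed).
func fadedImage(from source: NSImage) -> NSImage {
    NSImage(size: source.size, flipped: false) { rect in
        source.draw(in: rect)                               // 1. the original image
        NSGraphicsContext.current?.compositingOperation = .destinationOut
        // 2. gradient alpha 0→1 punches the image out to alpha 1→0
        NSGradient(starting: .clear, ending: .black)?.draw(in: rect, angle: 0)
        return true
    }
}
```

The resulting image can be drawn in draw() as usual or handed to an NSImageView.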
I need to add the classic effect which consists of highlighting a 3D model by stroking its outline, just like this example (without the transparent gradient, just a solid stroke):
I found a way to do this here, which seems pretty simple and easy to implement. The author plays with the stencil buffer to compute the model's shape, then draws the model in wireframe, and the thickness of the lines does the job.
Here is my problem: the wireframes. I'm using OpenGL ES 2.0, which means I can't use glPolygonMode to change the render mode to GL_LINE.
And I'm stuck there; I can't find any simple alternative. The most relevant solution I've found so far is to implement the wireframe rendering myself, which is clearly not the easiest option. To draw my objects I'm using glDrawElements with GL_TRIANGLES as the primitive; I tried GL_TRIANGLE_STRIP, but the result is definitely not the right one.
Any idea/trick to bypass the lack of glPolygonMode with OpenGL ES? Thanks in advance.
Drawing an outline or border for a model in OpenGL ES 2 is not as straightforward as the example you mentioned.
Method 1:
The easiest way is to do it in multiple passes.
Step 1 (Shape Pass): Render only the object, drawn in solid black, using the same camera settings, and fill all other pixels with a different color.
Step 2 (Render Pass): This is the usual render pass, where you draw the objects in their real colors. For every fragment, test the color at the same pixel in the shape-pass image and check whether any of the 8 neighboring pixels differ in color. If all neighboring pixels are the same color, the fragment is not on a border; otherwise, tint it with the border color.
Method 2: There are other techniques that can give you a similar effect in a single pass. You can draw the same object twice: first slightly scaled up in a single solid color, then at normal scale in its real colors.
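The border test in Step 2 can be sketched as plain CPU code for clarity (in a real implementation it lives in the fragment shader, sampling the shape-pass texture; the 2-D byte array here stands in for that texture):

```swift
import Foundation

// Sketch of the Step 2 border test: a pixel is on the border when any
// of its 8 neighbours in the shape-pass image has a different colour.
func isBorder(x: Int, y: Int, shapePass: [[UInt8]]) -> Bool {
    let h = shapePass.count, w = shapePass[0].count
    let center = shapePass[y][x]
    for dy in -1...1 {
        for dx in -1...1 where (dx, dy) != (0, 0) {
            let nx = x + dx, ny = y + dy
            // Pixels outside the image are ignored.
            guard (0..<w).contains(nx), (0..<h).contains(ny) else { continue }
            if shapePass[ny][nx] != center { return true }
        }
    }
    return false
}
```

In the shader version, the eight lookups become eight texture samples offset by one texel in each direction.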
How can I make collisions in XNA happen only for the area of the shape in an image, not the area around it?
For example, when I use the picture below, I want collision detection to happen only when the arrow shape itself is touched.
Currently, the collision detection happens over the whole area shown in this picture.
How can I limit it to the area of the shape only?
What you can also do is create two rectangles. That makes the false-positive area (the area covered by the rectangle but not by the image) a bit smaller. But if you need this to be pixel-exact, you have to use the resource-expensive per-pixel collision.
You shouldn't try restricting the image shape, because regardless of your efforts, you will still have a rectangle. What you need instead is per-pixel collision detection. It is a fairly extensive topic - you can read more about a Windows Phone-specific XNA implementation here.
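The core of per-pixel collision can be sketched as follows (in XNA you would fill the opacity masks from Texture2D.GetData<Color>() and account for sprite transforms; the Mask type and function names here are made up for illustration):

```swift
import Foundation

// Sketch: two sprites collide when some pixel in the overlap of their
// bounding rectangles is opaque in both opacity masks.
struct Mask {
    let width: Int, height: Int
    let opaque: [Bool]                       // true where alpha > 0
    func isOpaque(_ x: Int, _ y: Int) -> Bool { opaque[y * width + x] }
}

func pixelsCollide(_ a: Mask, at aPos: (x: Int, y: Int),
                   _ b: Mask, at bPos: (x: Int, y: Int)) -> Bool {
    // Intersect the two bounding rectangles first (cheap rejection).
    let left = max(aPos.x, bPos.x)
    let right = min(aPos.x + a.width, bPos.x + b.width)
    let top = max(aPos.y, bPos.y)
    let bottom = min(aPos.y + a.height, bPos.y + b.height)
    guard left < right, top < bottom else { return false }
    // Then scan only the overlap region for a pixel opaque in both.
    for y in top..<bottom {
        for x in left..<right where a.isOpaque(x - aPos.x, y - aPos.y)
                                 && b.isOpaque(x - bPos.x, y - bPos.y) {
            return true
        }
    }
    return false
}
```

The bounding-rectangle rejection keeps the expensive inner loop from running at all for sprites that aren't even close to each other.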
I couldn't post the image, but I use the "CGContextDrawRadialGradient" method to draw a shaded blue ball (~40 pixels in diameter), its shadow, and a "pulsing" white ring around the ball (inner and outer gradients on the ring). The ring starts at the edge of the blue ball and expands outward (the radius grows with a timer). The white ring fades as it expands, like a radio wave.
It looks great running in the simulator but runs incredibly slowly on the iPhone 4. The ring should pulse in about a second (as in the simulator), but takes 15-20 seconds on the phone. I have been reading a little about CALayer and CGLayer, and some pieces on gradient animation, but it isn't clear what I should be using for the best performance.
How do I speed this up? Should I put the ball on one layer and the expanding ring on another? If so, how do I know which layer to update in drawRect:?
Appreciate any guidance. Thanks.
The only way to speed something like that up is to pre-render it. Determine how many image frames you need to make it look good and then draw each frame into a context you created with CGBitmapContextCreate and capture the image using CGBitmapContextCreateImage. Probably the easiest way to animate the images would be to set the animationImages property of a UIImageView (although there are other options, like CALayer animations).
The newest Apple docs finally mention which pixel formats are supported in iOS so make sure to reference those when creating your bitmap context.
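A sketch of that pre-rendering approach in Swift, with a simple expanding white ring standing in for the actual radial-gradient drawing (function names and the ring drawing are illustrative, not the questioner's code):

```swift
import CoreGraphics
import UIKit

// Sketch: render each animation frame once into a bitmap context and
// capture it as an image, so the expensive gradient drawing never runs
// during the animation itself.
func prerenderRingFrames(count: Int, size: CGSize) -> [UIImage] {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var frames: [UIImage] = []
    for i in 0..<count {
        guard let ctx = CGContext(data: nil,
                                  width: Int(size.width), height: Int(size.height),
                                  bitsPerComponent: 8, bytesPerRow: 0,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { continue }
        let progress = CGFloat(i) / CGFloat(max(count - 1, 1))
        // Stand-in for the real radial-gradient ring: a white circle that
        // grows and fades with `progress`.
        let center = CGPoint(x: size.width / 2, y: size.height / 2)
        let radius = 20 + progress * (size.width / 2 - 24)
        ctx.setStrokeColor(UIColor(white: 1, alpha: 1 - progress).cgColor)
        ctx.setLineWidth(4)
        ctx.addArc(center: center, radius: radius, startAngle: 0,
                   endAngle: .pi * 2, clockwise: true)
        ctx.strokePath()
        if let cgImage = ctx.makeImage() {
            frames.append(UIImage(cgImage: cgImage))
        }
    }
    return frames
}
```

Then set `imageView.animationImages = frames`, `imageView.animationDuration = 1.0`, and call `imageView.startAnimating()` to play the pulse at full speed.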