Is it possible to convert a vector image into Quartz 2D code (Mac) so that the image can be drawn programmatically?
Not easily; you would have to write all the code yourself. You might like to have a look at the Opacity image editor, which lets you create images and export them as Quartz or Cocoa drawing code.
What kind of vector image?
NSImage loads PDFs the same way it loads bitmaps.
NSImage drawing is Quartz 2D drawing. But if you meant that you need a CGImage, NSImage in 10.6 has a method for getting one: -[NSImage CGImageForProposedRect:context:hints:]. However, CGImage is explicitly bitmap-based, unlike NSImage. The parameters you pass to that method determine how the art is rasterized: it will be rasterized the same way it would be if drawn into the passed rect in the passed context.
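For example, a minimal sketch (assumes 10.6+; the function name is invented for this example):

```objc
// Load a PDF into an NSImage and rasterize it to a CGImage. The proposed
// rect and context control how the vector art is rasterized.
#import <Cocoa/Cocoa.h>

CGImageRef CopyCGImageFromPDF(NSURL *pdfURL) {
    NSImage *image = [[NSImage alloc] initWithContentsOfURL:pdfURL];
    NSRect rect = NSMakeRect(0, 0, image.size.width, image.size.height);
    // A nil context rasterizes at the rect's size with default hints;
    // pass a real NSGraphicsContext to match a specific destination.
    CGImageRef cgImage = [image CGImageForProposedRect:&rect
                                               context:nil
                                                 hints:nil];
    return cgImage ? CGImageRetain(cgImage) : NULL; // caller releases
}
```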
You can use Vector Code (http://www.vectorcodeapp.com) to generate Core Graphics code, which you can use programmatically, or even use to generate PostScript.
Related
I would like to apply a CIFilter to a CGPath. A quick search reveals this is fairly straightforward on iOS. What are the options on OS X?
Are the steps:
1. create an image context,
2. create a CGPath and draw it into the image context,
3. apply the filter,
4. draw the image into the current graphics context (i.e., the NSView's)?
This seems like a huge amount of boilerplate for a reasonably common task. I just want to check that I have not missed anything!
Core Image operates on an image's pixels. Core Image filters generate new CIImage objects; they do not change the original context. You can, however, create a CIContext to draw into the image context.
You can't apply a filter to the image context directly, but you can create an image from the image context, apply the filter to that, and blend the images together.
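For example, a sketch of that flow inside an NSView subclass's -drawRect: (the 256×256 size, ellipse path, and CIGaussianBlur are placeholders chosen for illustration):

```objc
// Render a CGPath into an offscreen bitmap context, wrap the result in a
// CIImage, apply a CIFilter, and draw the filtered image into the view's
// current context.
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

- (void)drawRect:(NSRect)dirtyRect {
    // 1. Render the path offscreen.
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(NULL, 256, 256, 8, 0, space,
                                                kCGImageAlphaPremultipliedLast);
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathAddEllipseInRect(path, NULL, CGRectMake(32, 32, 192, 192));
    CGContextAddPath(bitmap, path);
    CGContextSetRGBStrokeColor(bitmap, 0, 0, 1, 1);
    CGContextStrokePath(bitmap);
    CGImageRef rendered = CGBitmapContextCreateImage(bitmap);

    // 2. Apply the filter to a CIImage made from the rendered bitmap.
    CIImage *input = [CIImage imageWithCGImage:rendered];
    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:input forKey:kCIInputImageKey];
    [blur setValue:@4.0 forKey:kCIInputRadiusKey];
    CIImage *output = [blur valueForKey:kCIOutputImageKey];

    // 3. Draw the result into the view's current graphics context.
    [output drawInRect:dirtyRect
              fromRect:[output extent]
             operation:NSCompositeSourceOver
              fraction:1.0];

    CGPathRelease(path);
    CGImageRelease(rendered);
    CGContextRelease(bitmap);
    CGColorSpaceRelease(space);
}
```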
I have an app; it's a small game using OpenGL ES with GLKit.
Now I'm wondering how it works when I want to draw text on my screen (if it is possible). How can I do it?
I draw all of my game objects using images (wrapped in some kind of sprite). It's possible to scale, move, and rotate them, and everything works fine. But finding out how to draw text on that GLKView has me deep in problems. ^^
I don't want to use UIImages, because I also don't know how to present UIImages on a GLKView.
There are a number of ways to do what you want:
1) Have an image with all the text glyphs you need in it. For example, if your application is in English, you'd have the 26 uppercase and 26 lowercase letters in the image. Upload that texture to the GPU and use the proper texture coordinates or glTexSubImage2D() to pull out the glyphs you need. (It's not clear to me if this is what you meant by not wanting a UIImage. It doesn't have to be a UIImage, though that's probably easiest.)
2) Every time you need to display text, draw it on the CPU on the fly, and upload the entire word, phrase, or sentence as a texture. You could create a CGBitmapContext and use Core Graphics to draw text into it, then upload it using glTexImage2D(). (A sketch of this approach follows the list.)
3) Get the individual glyphs out of the fonts and draw directly using the bezier curves that make up the glyphs. This also allows for 3D extrusion. However, this option is the most time-consuming to code and probably the least performant. It also involves dealing with the many small problems fonts have (like degenerate segments and incorrect winding orders). If you want to go down this path, Core Text may help.
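Here is a minimal sketch of option 2 (the function name is invented; assumes UIKit, OpenGL ES 2, and ARC):

```objc
// Draw an NSString into a CGBitmapContext with Core Graphics, then upload
// the pixels as a GL texture. Returns a texture the caller owns.
#import <UIKit/UIKit.h>
#import <OpenGLES/ES2/gl.h>

GLuint CreateTextureFromString(NSString *text, UIFont *font, CGSize size) {
    size_t width = (size_t)size.width, height = (size_t)size.height;
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8,
                                             4 * width, space,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);

    // Flip the context so UIKit's string drawing lands right-side up
    // in the bitmap's row order.
    CGContextTranslateCTM(ctx, 0, height);
    CGContextScaleCTM(ctx, 1, -1);

    UIGraphicsPushContext(ctx);
    [[UIColor whiteColor] set];
    [text drawAtPoint:CGPointZero withFont:font]; // deprecated in iOS 7;
                                                  // use drawAtPoint:withAttributes: there
    UIGraphicsPopContext();

    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(ctx));

    CGContextRelease(ctx);
    return texture;
}
```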
There are at least two clean ways to do this, depending on your requirements.
While documentation advises against compositing over a CAEAGLLayer (GLKView), it works quite well, at least in recent iOS versions, when transparent content is layered on top of the CAEAGLLayer. For example, try dropping a UITextView, with opaque set to false and a clear background color, on top of a GLKView in your Storyboard in Interface Builder in the Apple GLKit template or your app. In my test on an iPhone 5, frame rendering time remained around 1ms, even while scrolling in the text view. If your text needs are static, or you don't want the user to interact with the text, use CATextLayer as a child layer of your EAGLLayer instead of a view.
The second approach is to render the text into a texture. You can then composite the text onto your view by disabling the depth buffer and rendering the texture on a full screen rectangle. Look at UIGraphicsBeginImageContextWithOptions to see how to render to an offscreen image with Quartz. UIGraphicsGetImageFromCurrentImageContext allows you to retrieve the UIImage to use as a texture.
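For the texture route, a sketch of the Quartz side (the function name is invented; the resulting image's CGImage can then be turned into a texture, e.g. with GLKTextureLoader):

```objc
// Render a string to a UIImage offscreen using Quartz.
UIImage *ImageFromText(NSString *text, UIFont *font, CGSize size) {
    // Opaque NO, scale 0.0 = use the screen scale (Retina-aware).
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
    [[UIColor whiteColor] set];
    [text drawInRect:CGRectMake(0, 0, size.width, size.height) withFont:font];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
```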
In my iPad app, I am rendering to an offscreen bitmap, and then drawing the bitmap to the screen. (This is because I want to re-use existing bitmap rendering code.) On the iPad 2, this works like a charm, but on the new iPad with Retina display, drawing the bitmap is really slow, even though its resolution is still the same.
To draw the bitmap, we use the regular Quartz 2D functions: CGImageCreate with a data provider created by CGDataProviderCreateWithData, 32-bit RGBA format with kCGImageAlphaNoneSkipLast. In the UIView that displays the bitmap, in drawRect:, we use CGContextDrawImage to draw it to the context returned by UIGraphicsGetCurrentContext.
Note that I'm not even trying to draw at double resolution: for now I'm fine with the same resolution as I was using on the iPad 2. It looks like CoreGraphics is internally doubling the pixels, and then sending that to the GPU, even though the CGImage that I'm making should be fine for passing to the GPU directly. Any ideas?
It looks like CoreGraphics is internally doubling the pixels, and then sending that to the GPU,
Pretty much. More accurately (in spirit at least):
1. UIKit makes a CGBitmapContext the size of your view's bounds, in device pixels.
2. It makes that context the current context.
3. You draw your CGImage into that context... so CG has to rescale the source image and touch all of the destination pixels.
4. After you're done drawing, UIKit makes a CGImage from the bitmap context and assigns it to the view's layer's contents.
the CGImage that I'm making should be fine for passing to the GPU directly.
If you want that to happen, you need to tell the system to do that, by cutting out some of the steps above.
(There is no link between UIKit, Core Animation, and Core Graphics that provides a "fast path" like the one you are expecting.)
The easiest way would be to make a UIImageView, and set its image to a UIImage wrapping your CGImageRef.
Or, set your view.layer.contents to your CGImageRef. (Make sure not to override -drawRect:, not to call -setNeedsDisplay, and that contentMode is not UIViewContentModeRedraw. It's easier to just use UIImageView.)
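A sketch of both options (assumes ARC; the method name and the imageView/plainView properties are hypothetical, and cgImage is the CGImageRef you already produce each frame):

```objc
// Bypass -drawRect: by handing the CGImage to Core Animation directly,
// so no CPU-side rescale happens on the way to the GPU.
- (void)presentBitmap:(CGImageRef)cgImage {
    // Option 1: wrap it in a UIImageView. The scale argument keeps
    // Retina from re-doubling the pixels.
    self.imageView.image = [UIImage imageWithCGImage:cgImage
                                               scale:[UIScreen mainScreen].scale
                                         orientation:UIImageOrientationUp];

    // Option 2: assign the image to an existing view's layer directly.
    self.plainView.layer.contents = (__bridge id)cgImage;
    self.plainView.layer.contentsScale = [UIScreen mainScreen].scale;
}
```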
I have a custom property, and I override +needsDisplayForKey: to return YES for it.
My -drawInContext: method is pretty complex, while the rectangle affected by my custom property is pretty small. I'd like to optimize it.
The solution I'm thinking about is to implement a custom setter that explicitly marks the affected rectangle for display. One thing I'm not sure about is whether such an implementation would be equivalent to the original one and handle implicit animation. (rdar://11008555)
Will it be equivalent, or is there a better solution? (Target platforms are 10.6+.)
There is no way to draw on a portion of a CALayer. Core Animation layers are implemented as textures on OpenGL surfaces. When you implement custom drawing, what essentially happens is that your drawing commands are used to build a bitmap and then that bitmap is set as the texture for the CALayer's OpenGL surface.
What you could do is manage a bitmap cache (using an NSImage or CGContext) for the layer yourself. That way you could modify just the part of the bitmap that requires redrawing, then set the image as the layer's contents with setContents:.
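A minimal sketch of that cache approach (cacheContext and drawAffectedContentInContext: are hypothetical names introduced for this example):

```objc
// Redraw only the dirty rect of a bitmap context we own, then hand the
// whole bitmap to the layer as its contents.
- (void)redrawRect:(CGRect)dirty {
    // self.cacheContext: a CGBitmapContext created once, matching the
    // layer's pixel size (hypothetical property for this sketch).
    CGContextSaveGState(self.cacheContext);
    CGContextClipToRect(self.cacheContext, dirty);
    [self drawAffectedContentInContext:self.cacheContext]; // hypothetical
    CGContextRestoreGState(self.cacheContext);

    CGImageRef image = CGBitmapContextCreateImage(self.cacheContext);
    self.layer.contents = (__bridge id)image; // replaces the layer's texture
    CGImageRelease(image);
}
```

CGBitmapContextCreateImage is copy-on-write, so creating the image each time is cheaper than it looks; the full bitmap still gets re-uploaded to the GPU, but the expensive Quartz drawing is confined to the dirty rect.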
Does anyone know an easy way to draw arbitrary text in a Cocoa NSOpenGLView? I have a couple of constraints.
The text on screen may change from frame to frame (for example, a framerate display in the corner)
I would like to be able to select any font installed on the system at any size
Have you taken a look at the Cocoa OpenGL sample code? It includes "a texture class for strings, showing how to use an NSImage to write a string into and then texture from for high quality font rendering."
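In the same spirit as that sample, a minimal sketch (the function name is invented; assumes a current NSOpenGLContext and ARC):

```objc
// Draw the attributed string into an NSImage, capture the pixels in an
// NSBitmapImageRep, and upload them as a texture. The attributed string
// can use any installed font at any size.
#import <Cocoa/Cocoa.h>
#import <OpenGL/gl.h>

GLuint TextureFromString(NSAttributedString *string) {
    NSSize size = [string size];
    NSImage *image = [[NSImage alloc] initWithSize:size];
    [image lockFocus];
    [string drawAtPoint:NSZeroPoint];
    NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
        initWithFocusedViewRect:NSMakeRect(0, 0, size.width, size.height)];
    [image unlockFocus];

    GLuint texture;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    // Account for any row padding in the rep (assumes 32-bit RGBA, which
    // lockFocus typically produces).
    glPixelStorei(GL_UNPACK_ROW_LENGTH,
                  (GLint)([rep bytesPerRow] / ([rep bitsPerPixel] >> 3)));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)size.width,
                 (GLsizei)size.height, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                 [rep bitmapData]);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0); // restore the default
    return texture;
}
```

Since text like a framerate display changes often, regenerate the texture only when the string actually changes and reuse it otherwise.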