I have an app; it's a small game using OpenGL ES with GLKit.
Now I'm wondering how it works when I want to draw text on
the screen (if that is even possible).
How can I do it?
I draw all of my game objects using images (wrapped in some kind
of sprite). It's possible to scale, move, and rotate them,
and everything works fine.
But figuring out how to print text on that GLKView
has gotten me deep into problems. ^^
I don't want to use UIImages, because I also don't know how
to present UIImages on a GLKView.
There are a number of ways to do what you want:
1) Have an image with all the text glyphs you need in it. For example, if your application is in English, you'd have the 26 uppercase and 26 lowercase letters in the image. Upload that texture to the GPU and use the proper texture coordinates or glTexSubImage2D() to pull out the glyphs you need. (It's not clear to me if this is what you meant by not wanting a UIImage. It doesn't have to be a UIImage, though that's probably easiest.)
2) Every time you need to display text, draw it on the CPU on the fly, and upload the entire word, phrase, or sentence as a texture. You could create a CGBitmapContext and use Core Graphics to draw text into it, then upload it using glTexImage2D() (see the sketch after this list).
3) Get the individual glyphs out of the fonts and draw directly using the Bezier curves that make up the glyphs. This allows for 3D extrusion, too. However, this option is the most time-consuming to code and probably the least performant. It also involves dealing with the many small problems that fonts have (like degenerate segments and incorrect winding orders). If you want to go down this path, I think Core Text can help.
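To make option 2 concrete, here is a minimal sketch, not a definitive implementation: it assumes iOS 7+ string drawing (drawAtPoint:withAttributes:), a GL context that is already current, and a helper name, font, and color of my own choosing.

#import <UIKit/UIKit.h>
#import <OpenGLES/ES2/gl.h>

// Rasterize a string with Core Graphics/UIKit, then upload it as a GL texture.
GLuint TextureFromString(NSString *text, UIFont *font, CGSize size)
{
    size_t width  = (size_t)ceil(size.width);
    size_t height = (size_t)ceil(size.height);

    // 8-bit RGBA bitmap context; Core Graphics requires premultiplied alpha.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // Flip the context so UIKit text draws right side up.
    CGContextTranslateCTM(ctx, 0, height);
    CGContextScaleCTM(ctx, 1.0, -1.0);
    UIGraphicsPushContext(ctx);
    [text drawAtPoint:CGPointZero
       withAttributes:@{ NSFontAttributeName: font,
                         NSForegroundColorAttributeName: [UIColor whiteColor] }];
    UIGraphicsPopContext();

    // Upload the raw pixels to the GPU.
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(ctx));
    CGContextRelease(ctx);
    return texture;
}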
There are at least two clean ways to do this, depending on your requirements.
While documentation advises against compositing over a CAEAGLLayer (GLKView), it works quite well, at least in recent iOS versions, when transparent content is layered on top of the CAEAGLLayer. For example, try dropping a UITextView, with opaque set to false and a clear background color, on top of a GLKView in your Storyboard in Interface Builder in the Apple GLKit template or your app. In my test on an iPhone 5, frame rendering time remained around 1ms, even while scrolling in the text view. If your text needs are static, or you don't want the user to interact with the text, use CATextLayer as a child layer of your EAGLLayer instead of a view.
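For example, here is a rough sketch of the CATextLayer variant; the geometry, font size, and text are placeholder values, and it assumes a GLKViewController subclass:

#import <GLKit/GLKit.h>
#import <QuartzCore/QuartzCore.h>

// In a GLKViewController subclass:
- (void)viewDidLoad
{
    [super viewDidLoad];
    // ... existing GL setup ...

    // Add a text layer on top of the GLKView; Core Animation composites it.
    CATextLayer *textLayer = [CATextLayer layer];
    textLayer.string = @"Score: 0";
    textLayer.fontSize = 18.0;
    textLayer.foregroundColor = [UIColor whiteColor].CGColor;
    textLayer.frame = CGRectMake(20.0, 20.0, 200.0, 24.0);
    textLayer.contentsScale = [UIScreen mainScreen].scale; // crisp on Retina
    [self.view.layer addSublayer:textLayer]; // self.view is the GLKView
}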
The second approach is to render the text into a texture. You can then composite the text onto your view by disabling the depth buffer and rendering the texture on a full-screen rectangle. Look at UIGraphicsBeginImageContextWithOptions to see how to render to an offscreen image with Quartz. UIGraphicsGetImageFromCurrentImageContext lets you retrieve the resulting UIImage to use as a texture.
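A small sketch of that flow, assuming GLKit's GLKTextureLoader for the upload and iOS 7+ string drawing; the helper name and the font/color choices are mine:

#import <GLKit/GLKit.h>

// Render a string offscreen with Quartz, then hand the result to GLKit.
GLKTextureInfo *TextTexture(NSString *text, UIFont *font, CGSize size)
{
    UIGraphicsBeginImageContextWithOptions(size, NO /* transparent */, 0.0 /* screen scale */);
    [text drawInRect:(CGRect){ CGPointZero, size }
      withAttributes:@{ NSFontAttributeName: font,
                        NSForegroundColorAttributeName: [UIColor whiteColor] }];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // GLKTextureLoader handles the glTexImage2D details for us.
    NSError *error = nil;
    return [GLKTextureLoader textureWithCGImage:image.CGImage
                                        options:nil
                                          error:&error];
}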
Related
I want to render some graphics, text, and shapes to a bitmap using a canvas-like API such as nanovega in D, on my server.
I know how to create an arsd window with an OpenGL context and render to it (as per the documentation), but is it also possible to render to a headless context, or even to draw directly into some memory buffer? (I don't think my server has OpenGL available, so it would probably need a software renderer.)
Mainly I want to use gradients, text, shapes, rounded corners, images, and masks, and render all of that to an image. I know that nanovega implements all of the rendering parts of that, so I would like to keep using it.
So far I have built an application that allows the user to drag and drop images onto an NSImageView. However, I want to be able to move these images by clicking on any image and holding down the mouse button to drag it to a new location.
How can I manipulate NSImageView to translate/scale after setting the images down? Is that possible? I've read about NSAffineTransform, but it seems like that transforms the image before the image itself is drawn. I already have the images on the canvas, and simply want to click and hold an image and move it with the mouse. Please help, anyone!
There are two sides to this.
NSImage is the model object, which you might want to display in different ways, save to disk/archive, etc. If you want to actually change the model (scaling, rotating, etc.), implying a permanent change, then you are probably going to want to look at NSAffineTransform, Quartz drawing, etc.
But you probably didn't mean that. Instead you probably are interested in NSImageView, which is a view object, displaying the contents of the NSImage model object using whatever display attributes are desired. If you only want to change how an image is displayed, not what the actual bytes in the image are, then you are going to manipulate the NSImageView at run-time. You can use NSAffineTransform here as well, but it's somewhat uncommon (and usually unnecessary).
The key thing to note is that NSImageView inherits from NSView, so you have all of its power at your disposal. Take a look at methods such as:
-setFrameSize: - useful for changing the view size, and thus the image display scale
-setFrameOrigin: - useful for changing the view position, and thus the apparent image position (see the drag sketch after this list)
Note again that these have nothing to do with images per se, and apply to all Cocoa views. You may want to take a look at a book like Cocoa Programming for Mac OS X to get you past the basics. (You can then do more interesting things, like rotation, animation, etc.)
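For the click-and-drag part specifically, here is a minimal sketch, assuming an NSImageView subclass I'm calling DraggableImageView; it simply offsets the frame origin by the mouse movement reported with each drag event:

#import <Cocoa/Cocoa.h>

@interface DraggableImageView : NSImageView
@end

@implementation DraggableImageView

- (void)mouseDragged:(NSEvent *)event
{
    // Shift the view's frame origin by the movement since the last event.
    NSPoint origin = self.frame.origin;
    origin.x += event.deltaX;
    // deltaY grows downward, while an unflipped superview's y axis grows upward.
    origin.y -= event.deltaY;
    [self setFrameOrigin:origin];
}

@end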
I have a view designed for printing which includes a watermark, a transparent view which draws some text atop the other content.
When printing and using the Mac OS Save as PDF feature, the watermark text is selectable. Sometimes this interferes with selecting the other content, other times it's just distracting.
How can I make the text not selectable in the generated PDF?
I tried drawing the watermark behind the other content instead of in front. It didn't prevent selecting the watermark, but kept it out of the way of the other content. However, the table view rows occluded the watermark, which of course is worse.
A commenter asked for code, so here's the code that prepares the view:
// self.view is the print view
// watermark is an instance of WatermarkBackground, an NSView
if (watermark) {
    watermark.frame = self.view.frame;
    [self.view addSubview:watermark positioned:NSWindowAbove relativeTo:nil];
}
And the line in -[WatermarkBackground drawRect:] that does the drawing:
// _message is an NSString
// textAttributes returns a dictionary with a color and font
[_message drawWithRect:textRect
               options:NSStringDrawingUsesLineFragmentOrigin // wraps within textRect
            attributes:[WatermarkBackground textAttributes]];
One option would be to create one or more CGPaths from your string and draw those into the PDF instead. One way to do so would be to use CTFontCreatePathForGlyph, but it's actually quite a lot of work to do this for entire strings; Core Text does help, but it's a pretty low-level framework.
If you're always drawing the same watermark, it would be much easier to create a static PDF in some vector graphics app and draw that with CGContextDrawPDFPage() etc. Illustrator has a "Create Outlines" command for text objects.
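A rough sketch of drawing such a pre-outlined PDF with Quartz, assuming it ships in the app bundle as "watermark.pdf" (the file name and helper are my own):

#import <Cocoa/Cocoa.h>

// Draw page 1 of the bundled watermark PDF, scaled to fit the given bounds.
static void DrawWatermark(CGContextRef ctx, CGRect bounds)
{
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"watermark"
                                         withExtension:@"pdf"];
    CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((__bridge CFURLRef)url);
    CGPDFPageRef page = CGPDFDocumentGetPage(document, 1); // pages are 1-indexed

    // Fit the page's media box into the destination bounds.
    CGAffineTransform fit = CGPDFPageGetDrawingTransform(page, kCGPDFMediaBox,
                                                         bounds, 0, true);
    CGContextSaveGState(ctx);
    CGContextConcatCTM(ctx, fit);
    CGContextDrawPDFPage(ctx, page);
    CGContextRestoreGState(ctx);

    CGPDFDocumentRelease(document);
}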
As far as I know, there is no way to make text unselectable in a PDF. Probably the best solution would be to use an image watermark instead.
However, if the watermark is in front of the text, it can make the background text difficult to select, and if it is behind everything, you'll have the same issues with tables obscuring it. So a better plan might be not to try to make the text unselectable, but rather to make the table background transparent, and then use an image watermark.
Taking an idea from omz, instead of using CGPaths and generating them on the fly, the simplest, most elegant solution would be this:
Create a vector watermark by typing the text in a vector editor and expanding text to create outlines.
Save it as SVG or PDF.
Then, put this new vector graphic on top as a watermark. It will not be selectable, will not obscure the view, and will not be obscured by tables.
I can draw rich text with Core Text; the problem is placing images that flow with the text.
(iOS SDK 4.1)
I'm trying to draw some kind of rich text. The problem is that the designer placed many icons among the text, so the text I have to draw is something like this:
Here is a word <an icon image>, and another words.
The images (<another icon>) should be placed like glyphs; they are part of the text itself. Here, <icon> stands for an image. (This is not code, just an illustration.)
I could draw this by laying everything out manually, but it's too hard to keep all the complex text-layout behaviors that way. So I'm looking for a way to draw this with Core Text.
I found a solution.
The key to laying out non-text content is CTRunDelegate.
Core Text does not support non-text content, so you have to make blank spaces for it and draw or place the content yourself later.
A range of an NSAttributedString attributed with kCTRunDelegateAttributeName will call the registered callbacks to determine the width of each glyph. This lets you make blank space for each non-text object.
However, after drawing the text with Core Text, the layout information stored with the frame/line/run is invalidated. So you have to draw/place non-text content after laying out with the framesetter/typesetter, but before drawing.
This link describes basic usage of CTRunDelegate:
How to use CTRunDelegate in iPad?
There is a problem with Core Text, though. CTRunDelegate was originally designed to support variable width and vertical alignment via CTRunDelegateCallbacks.getAscent and CTRunDelegateCallbacks.getDescent, but the vertical-alignment feature doesn't currently work. This might be a bug.
I described the problem here:
Aligning multiple sized text vertical center instead of baseline with Core Text in iOS
If you have any information about this problem, please see my question at that link.
You simply set a delegate for a given CTRun, and the delegate object is responsible for letting Core Text know the run's ascent, descent, and width.
When Core Text "reaches" a CTRun that has a CTRunDelegate, it asks the delegate: how much width should I leave for this chunk of data, and how high should it be? This way you build a hole in the text, and then you draw your image in that very spot.
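To make that concrete, here is a minimal sketch of building the placeholder; the fixed 30x30 pt icon size and the helper names are my own assumptions:

#import <CoreText/CoreText.h>
#import <Foundation/Foundation.h>

// Fixed-size callbacks; a real app would read the icon size out of refCon.
static void IconDealloc(void *refCon) { /* nothing retained */ }
static CGFloat IconAscent(void *refCon)  { return 30.0; } // icon height above baseline
static CGFloat IconDescent(void *refCon) { return 0.0; }
static CGFloat IconWidth(void *refCon)   { return 30.0; } // blank space for the icon

// Build an attributed placeholder that reserves a glyph-sized hole for an icon.
NSAttributedString *IconPlaceholder(void)
{
    CTRunDelegateCallbacks callbacks = {
        .version    = kCTRunDelegateVersion1,
        .dealloc    = IconDealloc,
        .getAscent  = IconAscent,
        .getDescent = IconDescent,
        .getWidth   = IconWidth,
    };
    CTRunDelegateRef delegate = CTRunDelegateCreate(&callbacks, NULL);

    // U+FFFC (OBJECT REPLACEMENT CHARACTER) is the conventional stand-in glyph.
    NSAttributedString *placeholder =
        [[NSAttributedString alloc] initWithString:@"\uFFFC"
                                        attributes:@{ (__bridge id)kCTRunDelegateAttributeName:
                                                          (__bridge id)delegate }];
    CFRelease(delegate);
    return placeholder;
}

After layout, you enumerate the frame's lines and runs, find the run carrying the delegate attribute, and draw your icon image at that run's origin.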
Here is a blog post about Core Text. It has the answer for you:
How To Create a Simple Magazine App with Core Text
Does anyone know an easy way to draw arbitrary text in a Cocoa NSOpenGLView? I have a couple of constraints.
The text on screen may change from frame to frame (for example, a framerate display in the corner)
I would like to be able to select any font installed on the system at any size
Have you taken a look at the Cocoa OpenGL sample code? It includes "a texture class for strings, showing how to use an NSImage to write a string into and then texture from for high quality font rendering."
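Very roughly, the technique in that sample looks like the sketch below; the helper name is mine, and it assumes an NSOpenGLContext is already current. Because the texture is regenerated from the string, this also satisfies both constraints: re-run it each frame the text changes, and pass any installed font at any size.

#import <Cocoa/Cocoa.h>
#import <OpenGL/gl.h>

// Draw an attributed string into an offscreen bitmap, then upload it as a texture.
GLuint TextureFromAttributedString(NSAttributedString *string)
{
    NSSize size = [string size];
    NSBitmapImageRep *rep =
        [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                pixelsWide:(NSInteger)ceil(size.width)
                                                pixelsHigh:(NSInteger)ceil(size.height)
                                             bitsPerSample:8
                                           samplesPerPixel:4
                                                  hasAlpha:YES
                                                  isPlanar:NO
                                            colorSpaceName:NSCalibratedRGBColorSpace
                                               bytesPerRow:0
                                              bitsPerPixel:0];

    // Render the string into the bitmap with ordinary Cocoa text drawing.
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:
        [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
    [string drawAtPoint:NSZeroPoint];
    [NSGraphicsContext restoreGraphicsState];

    // Upload the pixels; call this again whenever the string changes.
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, (GLint)(rep.bytesPerRow / 4));
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)rep.pixelsWide,
                 (GLsizei)rep.pixelsHigh, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                 rep.bitmapData);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    return texture;
}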