Can't specify fill style for CTLineDraw - cocoa

I'm quite new to OS X/Cocoa development, and I'm having some trouble with CTLineDraw. I can happily change the stroke of the text using the CGContextSetStrokeXXX variants (I can even use patterns) when drawing with kCGTextStroke, but when I set the mode to kCGTextFill, the fill style of the current context is ignored and the text is rendered black. Here's some code that demonstrates the problem:
-(void) drawRect: (NSRect)rect {
    CGContextRef context = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
    float x = 50;
    float y = 50;
    CTFontRef font = CTFontCreateWithName(CFSTR("Arial"), 74, NULL);
    CFStringRef stringRef = CFStringCreateWithCString(NULL, "Hello SO", kCFStringEncodingUTF8);
    CFStringRef keys[] = { kCTFontAttributeName };
    CFTypeRef values[] = { font };
    CFDictionaryRef attributes = CFDictionaryCreate(kCFAllocatorDefault, (const void**)&keys,
                                                    (const void**)&values, sizeof(keys)/sizeof(keys[0]),
                                                    &kCFTypeDictionaryKeyCallBacks,
                                                    &kCFTypeDictionaryValueCallBacks);
    CFAttributedStringRef attrString = CFAttributedStringCreate(kCFAllocatorDefault, stringRef, attributes);
    CTLineRef line = CTLineCreateWithAttributedString(attrString);
    CGContextSetTextPosition(context, x, y);
    CGContextSetTextDrawingMode(context, kCGTextFill);
    // why doesn't this work?
    CGContextSetRGBFillColor(context, 1, 0, 0, 1);
    CGContextFillRect(context, CGRectMake(0, 0, 200, 200));
    CTLineDraw(line, context);
    CFRelease(stringRef);
    CFRelease(attributes);
    CFRelease(line);
    CFRelease(attrString);
    CFRelease(font);
}
I expect the text to be red, with a red square beneath it (the text would not show up against this background). Instead I see a red square with black text.
The incredibly detailed documentation for CTLineDraw says:
CTLineDraw
Draws a complete line.
There are no obvious alternative routes to specify the fill style in the Core Text or Quartz references. The only way I can get the desired text fill style is by using kCGTextClip and filling a rectangle over the area, which is stupid.
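For reference, that clip-based workaround looks roughly like this (a sketch; computing the fill rect via CTLineGetTypographicBounds is my own addition, not code from the question):
CGFloat ascent, descent, leading;
double lineWidth = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);
CGContextSaveGState(context);
CGContextSetTextPosition(context, x, y);
CGContextSetTextDrawingMode(context, kCGTextClip);
CTLineDraw(line, context);   // the glyph outlines become the clipping path
CGContextSetRGBFillColor(context, 1, 0, 0, 1);
CGContextFillRect(context, CGRectMake(x, y - descent, lineWidth, ascent + descent));
CGContextRestoreGState(context);   // drop the glyph clip again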
I'm testing on OS X 10.9 and building with Xcode 5 with C++11 language extensions, and this is in a .mm file (although I doubt that's causing the issue).
What am I missing?
Addendum:
It's probably worth noting my desired end result is to be able to render this text filled with patterns as well as solid colours.

After a bit more fiddling around I found that you must add the kCTForegroundColorFromContextAttributeName attribute: https://developer.apple.com/library/ios/documentation/Carbon/Reference/CoreText_StringAttributes_Ref/Reference/reference.html
CFStringRef keys[] = { kCTFontAttributeName, kCTForegroundColorFromContextAttributeName };
CFTypeRef values[] = { font, kCFBooleanTrue };
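With that attribute set to kCFBooleanTrue, CTLineDraw takes its fill from the current context, so solid colours and pattern fills (via CGContextSetFillPattern) both apply. A minimal sketch of the fixed setup:
CFStringRef keys[] = { kCTFontAttributeName, kCTForegroundColorFromContextAttributeName };
CFTypeRef values[] = { font, kCFBooleanTrue };
CFDictionaryRef attributes = CFDictionaryCreate(kCFAllocatorDefault, (const void**)&keys,
                                                (const void**)&values, sizeof(keys)/sizeof(keys[0]),
                                                &kCFTypeDictionaryKeyCallBacks,
                                                &kCFTypeDictionaryValueCallBacks);
// ...create attrString and line exactly as in the question...
CGContextSetTextDrawingMode(context, kCGTextFill);
CGContextSetRGBFillColor(context, 1, 0, 0, 1);   // now honoured by CTLineDraw
CTLineDraw(line, context);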

Related

How to merge two images in Xamarin Android and position them the same way as with CGRect on iOS?

I successfully load the two images from the resources folder in my project like this:
string BackGroundImage = "background_image";
string ObjectImage = "object_image";
var TheBackGroundImage = BitmapFactory.DecodeResource(Resources, Resources.GetIdentifier(BackGroundImage, "drawable", PackageName));
var TheObjectImage = BitmapFactory.DecodeResource(Resources, Resources.GetIdentifier(ObjectImage, "mipmap", PackageName));
Then comes the tricky part, and I do not quite know how to get it right. What I try to do is create a new Bitmap where the BackgroundImage is the base. Then I create a canvas with my second image (ObjectImage), which is the image that will be on top of the BackgroundImage, and try to merge it all together.
Bitmap Result = Bitmap.CreateBitmap(TheBackGroundImage.Width, TheBackGroundImage.Height, TheBackGroundImage.GetConfig());
Canvas canvas = new Canvas(Result);
canvas.DrawBitmap(TheObjectImage, new Matrix(), null);
canvas.DrawBitmap(TheObjectImage, 79, 79, null);
This does not work as anticipated. Is Canvas the way to go, or is there something else I should look at?
If we look at my iOS solution, I do it like this:
UIImage PhoneImage = UIImage.FromFile(PhonePath);
UIImage IconImage = UIImage.FromFile(IconPath);
UIImage ResultImage;
CGSize PhoneSize = PhoneImage.Size;
CGSize IconSize = IconImage.Size;
UIGraphics.BeginImageContextWithOptions(IconSize, false, IconImage.CurrentScale);
UIGraphics.BeginImageContext(PhoneSize);
CGRect Frame = new CoreGraphics.CGRect(25, 29.5, 79, 79);
UIBezierPath RoundImageCorner = UIBezierPath.FromRoundedRect(Frame, cornerRadius: 15);
PhoneImage.Draw(PhoneImage.AccessibilityActivationPoint);
RoundImageCorner.AddClip();
IconImage.Draw(Frame);
UIColor.LightGray.SetStroke();
RoundImageCorner.LineWidth = 2;
RoundImageCorner.Stroke();
ResultImage = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
var documentsDirectory = Environment.GetFolderPath(Environment.SpecialFolder.Personal);
string jpgFilename = System.IO.Path.Combine(documentsDirectory, "app.png");
NSData image = ResultImage.AsPNG();
And it works beautifully, with a border around my second image as well.
How can I adjust my code to successfully merge two images together and position the second image, preferably with something like a CGRect?
Try this:
public Bitmap mergeBitmap(Bitmap backBitmap, Bitmap frontBitmap)
{
    Bitmap bitmap = backBitmap.Copy(Bitmap.Config.Argb8888, true);
    Canvas canvas = new Canvas(bitmap);
    Rect baseRect = new Rect(0, 0, backBitmap.Width, backBitmap.Height);
    Rect frontRect = new Rect(0, 0, frontBitmap.Width, frontBitmap.Height);
    canvas.DrawBitmap(frontBitmap, frontRect, baseRect, null);
    return bitmap;
}
Update:
Here is an explanation of the DrawBitmap method; I have added annotations in the method.
public Bitmap mergeBitmap(Bitmap backBitmap, Bitmap frontBitmap)
{
    Bitmap bitmap = backBitmap.Copy(Bitmap.Config.Argb8888, true);
    Canvas canvas = new Canvas(bitmap);
    // This Rect decides which part of frontBitmap will be drawn:
    // (0, 0, frontBitmap.Width, frontBitmap.Height) means the whole of frontBitmap is drawn,
    // (0, 0, frontBitmap.Width/2, frontBitmap.Height/2) means half of frontBitmap is drawn.
    Rect frontRect = new Rect(0, 0, frontBitmap.Width, frontBitmap.Height);
    // This Rect decides where frontBitmap will be drawn on backBitmap:
    // (200, 200, 200 + frontBitmap.Width, 200 + frontBitmap.Height) means frontBitmap
    // is drawn into the Rect whose left is 200, top is 200, and whose width and
    // height are frontBitmap's width and height.
    // I suggest the baseRect's width and height be frontBitmap's width and height,
    // otherwise frontBitmap will be stretched or shrunk.
    Rect baseRect = new Rect(200, 200, 200 + frontBitmap.Width, 200 + frontBitmap.Height);
    canvas.DrawBitmap(frontBitmap, frontRect, baseRect, null);
    return bitmap;
}
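Presumably (an untested sketch, with the bitmap names taken from the question) you would call it with the two bitmaps you already decoded and hand the result to an ImageView:
// Merge and display; "resultImageView" is a hypothetical ImageView in your layout.
Bitmap merged = mergeBitmap(TheBackGroundImage, TheObjectImage);
resultImageView.SetImageBitmap(merged);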

OpenGL ES 2.0 object picking on iOS using color coding

I have read a lot of tutorials about using color coding to achieve 3D object picking on iOS, but I'm still not sure how to do it. Can anyone point me to a demo written in Objective-C?
A related question is this one:
OpenGL ES 2.0 Object Picking on iOS (Using Color Coding)
Many thanks,
luo
I have achieved object picking in an OpenGL ES scene using a snapshot. Here is the key code:
-(GLKVector4)pickAtX:(GLuint)x Y:(GLuint)y {
    GLKView *glkView = (GLKView*)[self view];
    UIImage *snapshot = [glkView snapshot];
    GLKVector4 objColor = [snapshot pickPixelAtX:x Y:y];
    return objColor;
}
And then, in your tap gesture handler, you just need to add this:
const CGPoint loc = [recognizer locationInView:[self view]];
GLKVector4 objColor = [self pickAtX:loc.x Y:loc.y];
if (GLKVector4AllEqualToVector4(objColor, GLKVector4Make(1.0f, 0.0f, 0.0f, 1.0f)))
{
    //do something.......
}
Of course, you should also add this category on UIImage:
@implementation UIImage (NDBExtensions)

- (GLKVector4)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {
    CGImageRef cgImage = [self CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    if ((x >= width) || (y >= height))
    {
        GLKVector4 baseColor = GLKVector4Make(0.0f, 0.0f, 0.0f, 1.0f);
        return baseColor;
    }
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);
    const UInt8* data = CFDataGetBytePtr(bitmapData);
    size_t offset = ((width * y) + x) * 4; // assumes 4 bytes per pixel, BGRA order
    float b = data[offset+0];
    float g = data[offset+1];
    float r = data[offset+2];
    float a = data[offset+3];
    CFRelease(bitmapData);
    NSLog(@"R:%f G:%f B:%f A:%f", r, g, b, a);
    GLKVector4 objColor = GLKVector4Make(r/255.0f, g/255.0f, b/255.0f, a/255.0f);
    return objColor;
}

@end
This is useful for achieving object picking in an OpenGL ES scene.
I have achieved 3D object picking using color coding; if anybody wants the demo, please contact me and tell me your e-mail.
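For the color-coding part itself (not shown above), the usual approach is to render each pickable object with a flat color derived from its index during the picking pass, then map the picked pixel back to that index. A minimal sketch of such an encoding, my own illustration rather than code from the thread (requires <GLKit/GLKit.h>):
// Encode an object index (1-based; 0 is reserved for the background) into an
// RGB color for the picking pass: 8 bits per channel, up to 2^24 - 1 objects.
static GLKVector4 ColorForObjectIndex(unsigned int index) {
    return GLKVector4Make(((index >> 16) & 0xFF) / 255.0f,
                          ((index >>  8) & 0xFF) / 255.0f,
                          ( index        & 0xFF) / 255.0f,
                          1.0f);
}

// Recover the index from the color returned by pickPixelAtX:Y:.
static unsigned int ObjectIndexForColor(GLKVector4 c) {
    unsigned int r = (unsigned int)(c.r * 255.0f + 0.5f);
    unsigned int g = (unsigned int)(c.g * 255.0f + 0.5f);
    unsigned int b = (unsigned int)(c.b * 255.0f + 0.5f);
    return (r << 16) | (g << 8) | b;
}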

How to choose the right buffer to draw and stop it from swapping continuously

Please tell me if the question is vague; I need the answer as soon as possible.
For more information about the problem you can refer to this question (I just didn't understand how to manage the buffers properly):
A red rectangle drawn on a 2D texture disappears right after being drawn
I am in the last stages of customizing the class COpenGLControl.
I have created two instances of the class in my MFC dialog:
whenever the zoom extent changes in the bigger window, it is drawn as a red rectangle on the smaller window, which is always in full-extent mode. In order to establish such a relation between the two instances, I have used user-defined messages and send the message to the parent of the class.
The main trouble, based on the information above:
1- When I pan in the bigger window (meaning I cause the user-defined message to be sent rapidly and m_oglWindow2.DrawRectangleOnTopOfTexture() to be called rapidly), I see a trail of red rectangles that are shown but immediately disappear in the smaller window.
2- CPU usage immediately jumps from 3% to 25% when panning.
3- In other navigation tasks like fixed zoom in, fixed zoom out, pan, etc., a red rectangle flashes and then immediately disappears. It seems the red rectangle is only there while control is inside m_oglWindow2.DrawRectangleOnTopOfTexture(), but I want the rectangle to stay until the next call of m_oglWindow2.DrawRectangleOnTopOfTexture().
4- Making calls to glDrawBuffer(GL_FRONT_AND_BACK) and glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) causes the texture in the smaller window to flicker off and on even when the mouse is idle.
I know the problem is mostly because of the glClear, glDrawBuffer, and SwapBuffers calls in the following code, but I don't know exactly how to solve it.
void COpenGLControl::OnTimer(UINT nIDEvent)
{
    wglMakeCurrent(hdc,hrc);
    switch (nIDEvent)
    {
        case 1:
        {
            // Clear color and depth buffer bits
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // Draw OpenGL scene
            oglDrawScene();
            // Swap buffers
            SwapBuffers(hdc);
            break;
        }
        default:
            break;
    }
    CWnd::OnTimer(nIDEvent);
    wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::DrawRectangleOnTopOfTexture()
{
    wglMakeCurrent(hdc, hrc);
    //glDrawBuffer(GL_FRONT_AND_BACK);
    //glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushAttrib(GL_ENABLE_BIT|GL_CURRENT_BIT);
    glDisable(target);
    glColor3f(1.0f,0.0f,0.0f);
    glBegin(GL_LINE_LOOP);
    glVertex2f(RectangleToDraw.at(0),RectangleToDraw.at(1));
    glVertex2f(RectangleToDraw.at(0),RectangleToDraw.at(3));
    glVertex2f(RectangleToDraw.at(2),RectangleToDraw.at(3));
    glVertex2f(RectangleToDraw.at(2),RectangleToDraw.at(1));
    glEnd();
    glPopAttrib();
    SwapBuffers(hdc);
    wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::OnDraw(CDC *pDC)
{
    // TODO: Camera controls
    wglMakeCurrent(hdc,hrc);
    glLoadIdentity();
    gluLookAt(0,0,1,0,0,0,0,1,0);
    glTranslatef(m_fPosX, m_fPosY, 0.0f);
    glScalef(m_fZoom,m_fZoom,1.0);
    wglMakeCurrent(NULL, NULL);
}
Remember:
The OnDraw function is called just twice in the smaller window, first when initializing the window and second when calling m_oglWindow2.ZoomToFullExtent(). Then, for each call of OnDraw in the bigger window, there's a call to DrawRectangleOnTopOfTexture() in the smaller one; DrawRectangleOnTopOfTexture() is never called in the bigger window.
It would be a favor if you could:
1- correct my code
2- introduce me to an excellent tutorial on how to use buffers in multiple drawings that cannot be done in a single function or a single thread (an excellent tutorial about buffers, e.g. color buffers, in OpenGL)
I just added the explanations below to provide further information about how the class works, in case it's needed. If you think it's bothering viewers, just edit it to remove whichever parts you feel are not required. But please do help me.
oglInitialize sets the initial parameters for the scene:
void COpenGLControl::oglInitialize(void)
{
    // Initial Setup:
    //
    static PIXELFORMATDESCRIPTOR pfd =
    {
        sizeof(PIXELFORMATDESCRIPTOR),
        1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA,
        32, // bit depth
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        24, // z-buffer depth
        8,0,PFD_MAIN_PLANE, 0, 0, 0, 0,
    };
    // Get device context only once.
    hdc = GetDC()->m_hDC;
    // Pixel format.
    m_nPixelFormat = ChoosePixelFormat(hdc, &pfd);
    SetPixelFormat(hdc, m_nPixelFormat, &pfd);
    // Create the OpenGL Rendering Context.
    hrc = wglCreateContext(hdc);
    wglMakeCurrent(hdc, hrc);
    // Basic Setup:
    //
    // Set color to use when clearing the background.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClearDepth(1.0f);
    // Turn on backface culling
    glFrontFace(GL_CCW);
    glCullFace(GL_BACK);
    // Turn on depth testing
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    // Send draw request
    OnDraw(NULL);
    wglMakeCurrent(NULL, NULL);
}
example of a navigation task:
PAN:
void COpenGLControl::OnMouseMove(UINT nFlags, CPoint point)
{
    // TODO: Add your message handler code here and/or call default
    if (WantToPan)
    {
        if (m_fLastX < 0.0f && m_fLastY < 0.0f)
        {
            m_fLastX = (float)point.x;
            m_fLastY = (float)point.y;
        }
        int diffX = (int)(point.x - m_fLastX);
        int diffY = (int)(point.y - m_fLastY);
        m_fLastX = (float)point.x;
        m_fLastY = (float)point.y;
        if (nFlags & MK_MBUTTON)
        {
            m_fPosX += (float)0.05f*m_fZoomInverse*diffX;
            m_fPosY -= (float)0.05f*m_fZoomInverse*diffY;
        }
        if (WantToSetViewRectangle)
            setViewRectangle();
        OnDraw(NULL);
    }
    CWnd::OnMouseMove(nFlags, point);
}
The most important part: before calling OnDraw in each of the navigation functions, if the client-programmer has set WantToSetViewRectangle to true (meaning he wants the view rectangle for the window to be calculated), setViewRectangle() is called, which is shown below. It sends a message to the parent whenever ViewRectangle is updated:
void COpenGLControl::setViewRectangle()
{
    CWnd *pParentOfClass = CWnd::GetParent();
    ViewRectangle.at(0) = -m_fPosX - oglWindowWidth*m_fZoomInverse/2;
    ViewRectangle.at(1) = -m_fPosY - oglWindowHeight*m_fZoomInverse/2;
    ViewRectangle.at(2) = -m_fPosX + oglWindowWidth*m_fZoomInverse/2;
    ViewRectangle.at(3) = -m_fPosY + oglWindowHeight*m_fZoomInverse/2;
    bool is_equal = ViewRectangle == LastViewRectangle;
    if (!is_equal)
        pParentOfClass->SendMessage(WM_RECTANGLECHANGED,0,0);
    LastViewRectangle.at(0) = ViewRectangle.at(0);
    LastViewRectangle.at(1) = ViewRectangle.at(1);
    LastViewRectangle.at(2) = ViewRectangle.at(2);
    LastViewRectangle.at(3) = ViewRectangle.at(3);
}
This is how we use the class in the client code:
MyOpenGLTestDlg.h
Two instances of the class:
COpenGLControl m_oglWindow;
COpenGLControl m_oglWindow2;
MyOpenGLTestDlg.cpp
Apply the texture to the windows and set both of them to full extent in OnInitDlg:
m_oglWindow.pImage = m_files.pRasterData;
m_oglWindow.setImageWidthHeightType(m_files.RasterXSize,m_files.RasterYSize,m_files.eType);
m_oglWindow.m_unpTimer = m_oglWindow.SetTimer(1, 1, 0);
m_oglWindow2.pImage = m_files.pRasterData;
m_oglWindow2.setImageWidthHeightType(m_files.RasterXSize,m_files.RasterYSize,m_files.eType);
m_oglWindow2.m_unpTimer = m_oglWindow2.SetTimer(1, 20, 0);
m_oglWindow2.ZoomToFullExtent();
m_oglWindow.ZoomToFullExtent();
We want pan, the zoom tool, and setViewRectangle to be active for the bigger window but not for the smaller one:
m_oglWindow.WantToPan = true;
m_oglWindow.WantToUseZoomTool = true;
m_oglWindow.WantToSetViewRectangle = true;
Handling the user-defined message in the parent: pass the ViewRectangle data to the smaller window and draw the red rectangle:
LRESULT CMyOpenGLTestDlg::OnRectangleChanged(WPARAM wParam,LPARAM lParam)
{
    m_oglWindow2.RectangleToDraw = m_oglWindow.ViewRectangle;
    m_oglWindow2.DrawRectangleOnTopOfTexture();
    return 0;
}
Here's the full customized class if you're interested in downloading it and fixing my problem.
The problem is that you're drawing both on a timer and when your application receives a WM_PAINT message. MFC invokes your OnDraw(...) callback when it needs to repaint the window; you should move ALL of your drawing functionality into OnDraw(...) and call OnDraw(...) from your timer function.
void COpenGLControl::OnTimer(UINT nIDEvent)
{
    switch (nIDEvent)
    {
        case 1:
        {
            OnDraw(NULL);
            break;
        }
        default:
            break;
    }
    CWnd::OnTimer(nIDEvent);
}
void COpenGLControl::OnDraw(CDC *pDC)
{
    // TODO: Camera controls
    wglMakeCurrent(hdc,hrc);
    // Clear color and depth buffer bits
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(0,0,1,0,0,0,0,1,0);
    glTranslatef(m_fPosX, m_fPosY, 0.0f);
    glScalef(m_fZoom,m_fZoom,1.0);
    // Draw OpenGL scene
    oglDrawScene();
    // Swap buffers
    SwapBuffers(hdc);
    wglMakeCurrent(NULL, NULL);
}
void COpenGLControl::DrawRectangleOnTopOfTexture()
{
    glPushAttrib(GL_ENABLE_BIT|GL_CURRENT_BIT);
    glDisable(target);
    glColor3f(1.0f,0.0f,0.0f);
    glBegin(GL_LINE_LOOP);
    glVertex2f(RectangleToDraw.at(0),RectangleToDraw.at(1));
    glVertex2f(RectangleToDraw.at(0),RectangleToDraw.at(3));
    glVertex2f(RectangleToDraw.at(2),RectangleToDraw.at(3));
    glVertex2f(RectangleToDraw.at(2),RectangleToDraw.at(1));
    glEnd();
    glPopAttrib();
}
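If the smaller window's rectangle should persist until the next update, one option (a sketch, not part of the original answer; WantToDrawRectangle is a hypothetical flag the parent dialog would set before triggering a repaint) is to draw the overlay inside OnDraw, between the scene draw and the buffer swap:
void COpenGLControl::OnDraw(CDC *pDC)
{
    wglMakeCurrent(hdc, hrc);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(0, 0, 1, 0, 0, 0, 0, 1, 0);
    glTranslatef(m_fPosX, m_fPosY, 0.0f);
    glScalef(m_fZoom, m_fZoom, 1.0f);
    oglDrawScene();
    if (WantToDrawRectangle)            // hypothetical flag
        DrawRectangleOnTopOfTexture();  // overlay rendered into the same frame
    SwapBuffers(hdc);                   // one swap per frame, overlay included
    wglMakeCurrent(NULL, NULL);
}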
And only ever make calls to wglMakeCurrent (...) within OnDraw (...). This function is really meant for situations where you're rendering into multiple render contexts, or drawing using multiple threads.

Jaggy paths when blitting an offscreen CGLayer to the current context

In my current project I need to draw a complex background for a few UITableView cells. Since the code that draws this background is pretty long and CPU-heavy when executed in the cell's drawRect: method, I decided to render it only once into a CGLayer and then blit it to each cell to improve overall performance.
The code I'm using to draw the background to a CGLayer:
+ (CGLayerRef)standardCellBackgroundLayer
{
    static CGLayerRef standardCellBackgroundLayer;
    if(standardCellBackgroundLayer == NULL)
    {
        CGContextRef viewContext = UIGraphicsGetCurrentContext();
        CGRect rect = CGRectMake(0, 0, [UIScreen mainScreen].applicationFrame.size.width, PLACES_DEFAULT_CELL_HEIGHT);
        standardCellBackgroundLayer = CGLayerCreateWithContext(viewContext, rect.size, NULL);
        CGContextRef context = CGLayerGetContext(standardCellBackgroundLayer);
        // Setup the paths
        CGRect rectForShadowPadding = CGRectInset(rect, (PLACES_DEFAULT_CELL_SHADOW_SIZE / 2) + PLACES_DEFAULT_CELL_SIDE_PADDING, (PLACES_DEFAULT_CELL_SHADOW_SIZE / 2));
        CGMutablePathRef path = createPathForRoundedRect(rectForShadowPadding, LIST_ITEM_CORNER_RADIUS);
        // Save the graphics context state
        CGContextSaveGState(context);
        // Draw shadow
        CGContextSetShadowWithColor(context, CGSizeMake(0, 0), PLACES_DEFAULT_CELL_SHADOW_SIZE, [Skin shadowColor]);
        CGContextAddPath(context, path);
        CGContextSetFillColorWithColor(context, [Skin whiteColor]);
        CGContextFillPath(context);
        // Clip for gradient
        CGContextAddPath(context, path);
        CGContextClip(context);
        // Draw gradient on clipped path
        CGPoint startPoint = rectForShadowPadding.origin;
        CGPoint endPoint = CGPointMake(rectForShadowPadding.origin.x, CGRectGetMaxY(rectForShadowPadding));
        CGContextDrawLinearGradient(context, [Skin listGradient], startPoint, endPoint, 0);
        // Restore the graphics state and release everything
        CGContextRestoreGState(context);
        CGPathRelease(path);
    }
    return standardCellBackgroundLayer;
}
The code to blit the layer to the current context:
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawLayerAtPoint(context, CGPointMake(0.0, 0.0), [Skin standardCellBackgroundLayer]);
}
This actually does the trick pretty nicely, but the one problem I'm having is that the rounded corners (see the static method) are very jaggy when blitted to the screen. This wasn't the case when the drawing code was at its original position in the drawRect: method.
How do I get this antialiasing back?
For some reason the following methods don't have any impact on the antialiasing:
CGContextSetShouldAntialias(context, YES);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGContextSetAllowsAntialiasing(context, YES);
Thanks in advance!
You can simplify this by just using UIGraphicsBeginImageContextWithOptions and setting the scale to 0.0.
Sorry for awakening an old post, but I came across it, so someone else may too. More details can be found in the UIGraphicsBeginImageContextWithOptions documentation:
"If you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen."
Basically, this means that on a Retina display it will create a Retina context; that way you can specify 0.0 and treat the coordinates as points.
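So, roughly (a sketch of the suggestion; the path, shadow, and gradient drawing itself stays exactly as in the question), you could render the background once into a UIImage instead of a CGLayer:
// 0.0 picks the main screen's scale, so the cached image is Retina-sharp.
UIGraphicsBeginImageContextWithOptions(rect.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
// ...same shadow/gradient/path drawing as in standardCellBackgroundLayer...
UIImage *background = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();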
I'm going to answer my own question since I figured it out some time ago.
You should create a Retina-aware context; the jaggedness only appeared on a Retina device. To counter this behavior, create such a context with this helper method:
// Begin a graphics context for retina or SD
void RetinaAwareUIGraphicsBeginImageContext(CGSize size)
{
    static CGFloat scale = -1.0;
    if(scale < 0.0)
    {
        UIScreen *screen = [UIScreen mainScreen];
        if([[[UIDevice currentDevice] systemVersion] floatValue] >= 4.0)
        {
            scale = [screen scale]; // Retina
        }
        else
        {
            scale = 0.0; // SD
        }
    }
    if(scale > 0.0)
    {
        UIGraphicsBeginImageContextWithOptions(size, NO, scale);
    }
    else
    {
        UIGraphicsBeginImageContext(size);
    }
}
Then, in your drawing method call the method listed above like so:
+ (CGLayerRef)standardCellBackgroundLayer
{
    static CGLayerRef standardCellBackgroundLayer;
    if(standardCellBackgroundLayer == NULL)
    {
        RetinaAwareUIGraphicsBeginImageContext(CGSizeMake(320.0, 480.0));
        CGRect rect = CGRectMake(0, 0, [UIScreen mainScreen].applicationFrame.size.width, PLACES_DEFAULT_CELL_HEIGHT);
        ...

Cocoa OpenGL Texture Creation

I am working on my first OpenGL application using Cocoa (I have used OpenGL ES on the iPhone), and I am having trouble loading a texture from an image file. Here is my texture-loading code:
@interface MyOpenGLView : NSOpenGLView
{
    GLenum texFormat[ 1 ];   // Format of texture (GL_RGB, GL_RGBA)
    NSSize texSize[ 1 ];     // Width and height
    GLuint textures[1];      // Storage for one texture
}

- (BOOL) loadBitmap:(NSString *)filename intoIndex:(int)texIndex
{
    BOOL success = FALSE;
    NSBitmapImageRep *theImage;
    int bitsPPixel, bytesPRow;
    unsigned char *theImageData;
    NSData* imgData = [NSData dataWithContentsOfFile:filename options:NSUncachedRead error:nil];
    theImage = [NSBitmapImageRep imageRepWithData:imgData];
    if( theImage != nil )
    {
        bitsPPixel = [theImage bitsPerPixel];
        bytesPRow = [theImage bytesPerRow];
        if( bitsPPixel == 24 )        // No alpha channel
            texFormat[texIndex] = GL_RGB;
        else if( bitsPPixel == 32 )   // There is an alpha channel
            texFormat[texIndex] = GL_RGBA;
        texSize[texIndex].width = [theImage pixelsWide];
        texSize[texIndex].height = [theImage pixelsHigh];
        theImageData = [theImage bitmapData]; // fetch the pixel data before testing it
        if( theImageData != NULL )
        {
            NSLog(@"Good so far...");
            success = TRUE;
            // Create the texture
            glGenTextures(1, &textures[texIndex]);
            NSLog(@"tex: %i", textures[texIndex]);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
            glPixelStorei(GL_UNPACK_ROW_LENGTH, [theImage pixelsWide]);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            // Typical texture generation using data from the bitmap
            glBindTexture(GL_TEXTURE_2D, textures[texIndex]);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texSize[texIndex].width, texSize[texIndex].height, 0, texFormat[texIndex], GL_UNSIGNED_BYTE, theImageData);
            NSLog(@"%i", glIsTexture(textures[texIndex]));
        }
    }
    return success;
}
It seems that the glGenTextures() function is not actually creating a texture because textures[0] remains 0. Also, logging glIsTexture(textures[texIndex]) always returns false.
Any suggestions?
Thanks,
Kyle
glGenTextures(1, &textures[texIndex]);
What is your textures definition?
glIsTexture only returns true once the name is in use: a name returned by glGenTextures, but not yet associated with a texture by calling glBindTexture, is not the name of a texture.
Check whether glGenTextures is by accident executed between glBegin and glEnd -- that's the only official failure reason.
Also:
Check that the texture is square and has dimensions that are a power of 2. Although it isn't emphasized nearly enough, the iPhone's OpenGL ES implementation requires them to be that way.
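A quick way to check that (a small helper of my own, not from the thread):
// Returns true when n is a non-zero power of two (64, 128, 256, ...).
static BOOL IsPowerOfTwo(NSUInteger n)
{
    return n != 0 && (n & (n - 1)) == 0;
}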
OK, I figured it out. It turns out that I was trying to load the textures before I set up my context. Once I put loading textures at the end of the initialization method, it worked fine.
Thanks for the answers.
Kyle
