Best way of handling varying iDevice widths? - iOS 8

I'm wondering if anyone can provide some insight into how to handle varying device sizes when designing in a storyboard.
Do you have to check the device frame size before drawing views, then?
Thanks.

There are two ways to go about this. If you insist on using frames, you'll want to check the device before drawing your views. One approach is to write a method in your utils file that checks the hardware model, something like this:
// Requires: #import <sys/sysctl.h>
+ (NSString *)getHardwareModel {
    AppDelegate_iPhone *appDelegate_iPhone = (AppDelegate_iPhone *)[[UIApplication sharedApplication] delegate];
    size_t size;
    // get the size of the returned device name
    sysctlbyname("hw.machine", NULL, &size, NULL, 0);
    // allocate the space to store the name
    char *machine = (char *)malloc(size);
    // get the device name
    sysctlbyname("hw.machine", machine, &size, NULL, 0);
    // place the name into an NSString
    NSString *platform = [NSString stringWithCString:machine encoding:NSUTF8StringEncoding];
    free(machine);
    appDelegate_iPhone.hardwareModel = platform;
    return platform;
}
Once we get back what device we are using, we can then set the frame accordingly. So if I wanted to check for, say, the iPhone 6 Plus (hardware identifier iPhone7,1), I would do something like this in my setFrameSize method:
NSString *hardwareVersion = [Utils getHardwareModel];
NSString *target = @"x86_64";           // simulator
NSString *deviceTarget = @"iPhone7,1";  // iPhone 6 Plus
NSRange range = [hardwareVersion rangeOfString:target];
NSRange deviceRange = [hardwareVersion rangeOfString:deviceTarget];
NSLog(@"device=%@", hardwareVersion);
// Setting the frame size for the progress bar
if ([[[UIDevice currentDevice] systemVersion] floatValue] >= 8.0) {
    if (range.location != NSNotFound || deviceRange.location != NSNotFound) {
        float frameSize = self.view.frame.size.width;
        NSLog(@"Frame width===%f", frameSize);
        self.view.frame = CGRectMake(0, 0, 480, 85);
    }
}
Another way that avoids all of this is to use Auto Layout and set constraints that adapt to the different screen sizes: https://developer.apple.com/library/ios/documentation/UserExperience/Conceptual/AutolayoutPG/Introduction/Introduction.html
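To make the Auto Layout route a bit more concrete, here is a rough sketch (not from the original answer; the progressBar view and the 85-point height are just placeholders) of pinning a view to its superview with constraints so you never touch frames at all:
// A minimal sketch: pin a bar to the top of its superview and let
// Auto Layout handle every screen width (iOS 8 era API).
UIView *progressBar = [[UIView alloc] init];
progressBar.translatesAutoresizingMaskIntoConstraints = NO;
[self.view addSubview:progressBar];
[self.view addConstraints:@[
    [NSLayoutConstraint constraintWithItem:progressBar attribute:NSLayoutAttributeLeading
                                 relatedBy:NSLayoutRelationEqual toItem:self.view
                                 attribute:NSLayoutAttributeLeading multiplier:1.0 constant:0.0],
    [NSLayoutConstraint constraintWithItem:progressBar attribute:NSLayoutAttributeTrailing
                                 relatedBy:NSLayoutRelationEqual toItem:self.view
                                 attribute:NSLayoutAttributeTrailing multiplier:1.0 constant:0.0],
    [NSLayoutConstraint constraintWithItem:progressBar attribute:NSLayoutAttributeTop
                                 relatedBy:NSLayoutRelationEqual toItem:self.view
                                 attribute:NSLayoutAttributeTop multiplier:1.0 constant:0.0],
    [NSLayoutConstraint constraintWithItem:progressBar attribute:NSLayoutAttributeHeight
                                 relatedBy:NSLayoutRelationEqual toItem:nil
                                 attribute:NSLayoutAttributeNotAnAttribute multiplier:1.0 constant:85.0]
]];
Because the leading and trailing edges are pinned, the bar stretches to whatever width the device has; no hardware-model check is needed.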

Related

Using NSGlyph and memory allocation

In a method that tracks line breaks frequently, for an NSTextView's visibleRect, I am allocating memory for NSGlyph to use NSLayoutManager's getGlyphs:range:.
Should/can I find out how much memory this needs, since I have a reference for the range (without affecting layout)? And also, what kind of cleanup should happen, running with ARC?
The code (which runs on the main queue):
NSLayoutManager *lm = [self.textView layoutManager];
NSTextContainer *tc = [self.textView textContainer];
NSRect vRect = [self.textView visibleRect];
NSRange visibleRange = [lm glyphRangeForBoundingRectWithoutAdditionalLayout:vRect inTextContainer:tc];
NSUInteger vRangeLoc = visibleRange.location;
NSUInteger numberOfLines;
NSUInteger index;
NSGlyph glyphArray[5000]; // <--- memory assigned here
NSUInteger numberOfGlyphs = [lm getGlyphs:glyphArray range:visibleRange];
NSRange lineRange;
NSMutableIndexSet *idxset = [NSMutableIndexSet indexSet];
for (numberOfLines = 0, index = 0; index < numberOfGlyphs; numberOfLines++) {
    (void)[lm lineFragmentRectForGlyphAtIndex:index effectiveRange:&lineRange withoutAdditionalLayout:YES];
    [idxset addIndex:lineRange.location + vRangeLoc];
    index = NSMaxRange(lineRange);
}
self.currentLinesIndexSet = idxset;
With the NSGlyph glyphArray[5000] declaration, you're allocating the memory on the stack. But instead of 5000 glyphs it only has to hold visibleRange.length + 1 glyphs. From the documentation for getGlyphs:range:
glyphArray
On output, the displayable glyphs from glyphRange, null-terminated. Does not include in the result any NSNullGlyph or other glyphs that are not shown. The memory passed in should be large enough for at least glyphRange.length+1 elements.
And because it is on the stack, you don't have to worry about freeing the memory: nothing was ever malloced, so it is released automatically when the function returns, even without ARC.
So it should work if you write it like this:
NSLayoutManager *lm = ...
NSRange glyphRange = ...
NSGlyph glyphArray[glyphRange.length + 1];
NSUInteger numberOfGlyphs = [lm getGlyphs:glyphArray range:glyphRange];
// do something with your glyphs
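One caveat worth noting (my addition, not part of the original answer): a variable-length array like this lives on the stack, so for an unusually large visible range it may be safer to fall back to the heap. A hedged sketch, with an arbitrary cutoff:
NSUInteger needed = glyphRange.length + 1;
if (needed > 4096) {                        // arbitrary cutoff for stack safety; tune as needed
    NSGlyph *glyphs = malloc(needed * sizeof(NSGlyph));
    NSUInteger numberOfGlyphs = [lm getGlyphs:glyphs range:glyphRange];
    // ... walk the glyphs as in the question ...
    free(glyphs);                           // heap memory is not managed by ARC
} else {
    NSGlyph glyphs[needed];                 // variable-length array on the stack
    NSUInteger numberOfGlyphs = [lm getGlyphs:glyphs range:glyphRange];
    // ... walk the glyphs as in the question ...
}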

EXC_BAD_ACCESS error in Xcode

I really need your help. I run my program in Xcode and it builds successfully, but later
it shows me this error: Thread 1: Program received signal "EXC_BAD_ACCESS", on the line I have marked below:
- (NSString *)ocrImage:(UIImage *)uiImage
{
    CGSize imageSize = [uiImage size];
    double bytes_per_line = CGImageGetBytesPerRow([uiImage CGImage]);
    double bytes_per_pixel = CGImageGetBitsPerPixel([uiImage CGImage]) / 8.0;
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider([uiImage CGImage]));
    const UInt8 *imageData = CFDataGetBytePtr(data);
    // this could take a while. maybe needs to happen asynchronously.
    char *text = tess->TesseractRect(imageData, (int)bytes_per_pixel, (int)bytes_per_line, 0, 0, (int)imageSize.height, (int)imageSize.width); // <-- EXC_BAD_ACCESS here
    // Do something useful with the text!
    NSLog(@"Converted text: %@", [NSString stringWithCString:text encoding:NSUTF8StringEncoding]);
    return [NSString stringWithCString:text encoding:NSUTF8StringEncoding];
}
Thank you.
Make sure that imageData is not NULL here; that's the most common cause of what you're seeing. You should also retitle the question to something more related to your problem, and focus on the stack trace and all the variables you are passing to TesseractRect().
The other major likelihood is that tess (whatever that is) is a bad pointer, or is not an instance of the correct C++ class (I assume this is Objective-C++; you're not clear on any of that).
- (NSString *)readAndProcessImage:(UIImage *)uiImage
{
    CGSize imageSize = [uiImage size];
    int bytes_per_line = (int)CGImageGetBytesPerRow([uiImage CGImage]);
    int bytes_per_pixel = (int)CGImageGetBitsPerPixel([uiImage CGImage]) / 8.0;
    CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider([uiImage CGImage]));
    const UInt8 *imageData = CFDataGetBytePtr(data);
    // this could take a while. maybe needs to happen asynchronously?
    char *text = tess.TesseractRect(imageData, bytes_per_pixel, bytes_per_line, 0, 0, imageSize.width, imageSize.height);
    NSString *textStr = [NSString stringWithUTF8String:text];
    delete[] text;
    CFRelease(data);
    return textStr;
}
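To make the NULL-pointer advice above concrete, here is a hedged sketch of the defensive checks you might add before calling into Tesseract (variable names follow the question; the early-return behaviour and the assumed type of tess are mine):
// Sketch: guard against the two most likely causes of the EXC_BAD_ACCESS.
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider([uiImage CGImage]));
if (data == NULL) {
    NSLog(@"Could not copy image data");
    return nil;
}
const UInt8 *imageData = CFDataGetBytePtr(data);
if (imageData == NULL || tess == NULL) {   // tess: assumed to be a tesseract::TessBaseAPI *
    NSLog(@"imageData or tess is NULL");
    CFRelease(data);
    return nil;
}
// ... safe to call tess->TesseractRect(...) now ...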

Core Graphics & GIF Color Table

I am trying to limit the number of colors of an animated GIF (created from an array of CGImageRefs).
However, I am having difficulty actually setting the custom color table. Does anyone know how to do this with Core Graphics?
I know of kCGImagePropertyGIFImageColorMap. Below is some test code (borrowing heavily from this GitHub gist, since it's the only instance of kCGImagePropertyGIFImageColorMap Google could find).
NSString *path = [@"~/Desktop/test.png" stringByExpandingTildeInPath];
CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData((CFDataRef)[NSData dataWithContentsOfFile:path]);
CGImageRef image = CGImageCreateWithPNGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
const uint8_t colorTable[8] = { 0, 0, 0, 0, 255, 255, 255, 255 };
NSData *colorTableData = [NSData dataWithBytes:colorTable length:8];
NSMutableDictionary *gifProps = [NSMutableDictionary dictionary];
[gifProps setObject:colorTableData forKey:(NSString *)kCGImagePropertyGIFImageColorMap];
[gifProps setObject:[NSNumber numberWithBool:NO] forKey:(NSString *)kCGImagePropertyGIFHasGlobalColorMap];
NSDictionary *imgProps = [NSDictionary dictionaryWithObject:gifProps forKey:(NSString *)kCGImagePropertyGIFDictionary];
NSURL *destUrl = [NSURL fileURLWithPath:[@"~/Desktop/test.gif" stringByExpandingTildeInPath]];
CGImageDestinationRef dst = CGImageDestinationCreateWithURL((CFURLRef)destUrl, kUTTypeGIF, 1, NULL);
CGImageDestinationAddImage(dst, image, imgProps);
CGImageDestinationSetProperties(dst, (CFDictionaryRef)imgProps);
CGImageDestinationFinalize(dst);
CFRelease(dst);
This, however, does not produce a black & white image.
Furthermore, I've tried opening a GIF to find the color table information, but that's providing little help.
CGImageSourceRef imageSourceRef = CGImageSourceCreateWithURL((CFURLRef)destUrl, NULL);
NSDictionary *dict = (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(imageSourceRef, 0, NULL);
CGImageRef img = CGImageSourceCreateImageAtIndex(imageSourceRef, 0, NULL);
printf("Color space model: %d, indexed=%d, rgb=%d\n", CGColorSpaceGetModel(CGImageGetColorSpace(img)), kCGColorSpaceModelIndexed, kCGColorSpaceModelRGB);
NSLog(@"%@", dict);
It says the color space is RGB for GIFs. Yet, if I try that code with an indexed PNG, it says the color space is indexed.
Furthermore, all the GIFs I've tried have an image dictionary that looks roughly like the following:
{
    ColorModel = RGB;
    Depth = 8;
    HasAlpha = 1;
    PixelHeight = 176;
    PixelWidth = 314;
    "{GIF}" = {
        DelayTime = "0.1";
        UnclampedDelayTime = "0.04";
    };
}
(If I use CGImageSourceCopyProperties(...), it mentions a global color map, but again no color table is provided.)
I didn't run your code, but from reading it I spotted one mistake: the color space in a GIF is RGB, so your color map should have numcolors*3 entries (instead of *4). iOS doesn't deal gracefully with color maps whose size is not a multiple of 3.
In other words:
const uint8_t colorTable[6] = { 0, 0, 0, 255, 255, 255 };
NSData *colorTableData = [NSData dataWithBytes:colorTable length:6];
should do the job.
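For completeness, a sketch of how that 3-bytes-per-entry table slots back into the question's properties dictionary (everything else unchanged; whether ImageIO honors a per-frame map here is exactly what the question is probing):
// black and white only: 2 entries x 3 bytes (R, G, B) each
const uint8_t colorTable[6] = { 0, 0, 0, 255, 255, 255 };
NSData *colorTableData = [NSData dataWithBytes:colorTable length:6];
NSMutableDictionary *gifProps = [NSMutableDictionary dictionary];
[gifProps setObject:colorTableData forKey:(NSString *)kCGImagePropertyGIFImageColorMap];
[gifProps setObject:[NSNumber numberWithBool:NO] forKey:(NSString *)kCGImagePropertyGIFHasGlobalColorMap];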

Trying to get bytes and append using NSMutableData for a video through Asset Library gives memory full error

I'm trying to upload a video of size 100 MB through the Assets Library. But when I try to use -(NSUInteger)getBytes:(uint8_t *)buffer fromOffset:(long long)offset length:(NSUInteger)length error:(NSError **)error of ALAssetRepresentation, I get a memory full error. I also need to put the data from the buffer into an NSData. How can I achieve that?
I tried this way:
Byte *buffer = (Byte *)malloc(asset.defaultRepresentation.size);
NSUInteger k = [asset.defaultRepresentation getBytes:buffer fromOffset:0 length:asset.defaultRepresentation.size error:nil];
NSData *adata = [NSData dataWithBytesNoCopy:buffer length:k freeWhenDone:YES];
It really works!
As @runeb said, the answer above does not work properly with large files. You should do something like this:
int bufferSize = 2048;
int offset = 0;
NSString *name = nil;
while (offset < asset.size) {
    Byte *buffer = (Byte *)malloc(bufferSize);
    NSUInteger buffered = [asset getBytes:buffer fromOffset:offset length:bufferSize error:nil];
    NSData *data = [NSData dataWithBytesNoCopy:buffer length:buffered freeWhenDone:NO];
    if (!name) {
        // Creates the file and gives it a unique name
        name = [FileUtils saveVideoFromAsset:data];
    } else {
        // Append data to the file created...
        [FileUtils appendData:data toFile:name];
    }
    offset += buffered;
    free(buffer);
}
In order to append data to a file you can use this:
NSFileHandle *myHandle = [NSFileHandle fileHandleForWritingAtPath:filePath];
[myHandle seekToEndOfFile];
[myHandle writeData:videoData];
I hope that helps!
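FileUtils above is the answerer's own helper class, not an SDK API; here is a minimal sketch of what its two methods might look like (names and behaviour are assumed from how they are called in the loop above):
// Hypothetical FileUtils helpers, reconstructed from the calls above.
+ (NSString *)saveVideoFromAsset:(NSData *)data {
    // Create a uniquely named file in the temporary directory and write the first chunk.
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:
                      [NSString stringWithFormat:@"%@.mov", [[NSUUID UUID] UUIDString]]];
    [data writeToFile:path atomically:YES];
    return path;
}

+ (void)appendData:(NSData *)data toFile:(NSString *)path {
    // Append subsequent chunks to the end of the existing file.
    NSFileHandle *handle = [NSFileHandle fileHandleForWritingAtPath:path];
    [handle seekToEndOfFile];
    [handle writeData:data];
    [handle closeFile];
}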
Just add an @autoreleasepool block, so that any autoreleased objects get cleaned up. It looks like ARC changed something here after iOS 7:
@autoreleasepool {
    NSUInteger readStatus = [rep getBytes:buffer fromOffset:_startFromByte length:chunkSize error:NULL];
}

CGDataProvider doesn't free up data on callback

I am creating a very big buffer (called buffer2 in the code) using CGDataProviderRef with the following code:
- (UIImage *)glToUIImage {
    NSInteger myDataLength = 768 * 1024 * 4;
    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *)malloc(myDataLength);
    glReadPixels(0, 0, 768, 1024, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
    for (int y = 0; y < 1024; y++)
    {
        for (int x = 0; x < 768 * 4; x++)
        {
            buffer2[(1023 - y) * 768 * 4 + x] = buffer[y * 4 * 768 + x];
        }
    }
    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, &releaseBufferData);
    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * 768;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    // make the cgimage
    CGImageRef imageRef = CGImageCreate(768, 1024, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    free(buffer);
    //[provider autorelease];
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(imageRef);
    return myImage;
}
I expect CGProvider to call back the releaseBufferData method when it is done with buffer2 so that I can free up the memory it's taken. The code for this method is:
static void releaseBufferData(void *info, const void *data, size_t size) {
    free((void *)data); // cast away const; free() takes a non-const pointer
}
However, even though my callback method is called, the memory that data (buffer2) takes is never freed and hence it results in massive memory leaks. What am I doing wrong?
Did you ever call CGDataProviderRelease on your provider? The callback will not be called if you don't release the data provider.
For some peculiar reason this is not an issue anymore.
Just in case this helps someone else. I was having the same problem. It started working once I called
CGImageRelease(imageRef);
right before the
CGDataProviderRelease(provider);
A malloc'd buffer isn't freed in a "release" callback when it was allocated on one thread but the callback that deallocates it runs on another. Wrap both your allocation and deallocation in this:
dispatch_async(dispatch_get_main_queue(), ^{
    // *malloc* and *free* go here; don't call &releaseCallBack or some such anywhere
});
A second thing to try is a completion block. Instead of returning an image in the traditional way (via a method return value), pass it out through a completion block; the UIImage will be freed as soon as the completion block ends.
For example, if you're trying to save multiple images to the Photos library but the malloc'd data isn't freeing after each image is created, pass the image back via a completion block, making sure you create no new instance of the image that is passed back, and it will be gone as soon as it hits the closing };
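A rough sketch of that completion-block shape (the method and parameter names are mine, not from the answer):
// Hypothetical completion-block variant of glToUIImage.
- (void)glToUIImageWithCompletion:(void (^)(UIImage *image))completion {
    UIImage *myImage = [self glToUIImage]; // builds the image as in the question
    if (completion) {
        completion(myImage); // caller consumes the image inside the block
    }
    // nothing retains myImage here, so it (and the provider's backing data)
    // can be released as soon as the block returns
}

// Usage: save to the Photos library without keeping an extra reference.
[self glToUIImageWithCompletion:^(UIImage *image) {
    UIImageWriteToSavedPhotosAlbum(image, nil, NULL, NULL);
}];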
A third thing is calloc instead of malloc:
GLubyte *buffer = (GLubyte *)calloc(myDataLength, sizeof(GLubyte));
That's what I use now where I once had malloc, and it obviates the need for the prior two suggestions. I use OpenGL to populate a collection view consisting of a single row of cells, each with one frame from a video. To skim the video you slide the collection view; if you see a frame you want to save as an image, you long-press it; if you want to advance to that frame in the video, you tap it. As you know, even short videos have a lot of frames; the calloc solution knocks about 256 MB off total memory usage on every call to the release callback, which otherwise builds up when you scroll blurry fast.
