Why doesn't my UIView update and show intermediate results?

In my app I want to stack a large number of (identical) PNG files on top of each other and show the user the resulting image as well as a progress bar. To improve performance I came up with the following code, but I only see the end result once the for-loop finishes.
For large sets of images this is not what I want; I really want to see intermediate results. The following code probably contains errors, which I haven't been able to find yet:
- (IBAction)doDrawLayers:(NSArray *)drawImages
{
    // Use dispatch queues for drawing intermediate results and progress.
    _startDrawing = [NSDate date];
    dispatch_queue_t calcImage = dispatch_queue_create("calcImage", NULL);
    dispatch_async(calcImage, ^{
        _sldProgress.maximumValue = drawImages.count;
        _sldProgress.minimumValue = 0;
        _imageNr = 0;
        [_sldProgress setMaximumValue:drawImages.count];
        for (NSString *imageFile in drawImages) {
            @autoreleasepool {
                // Update in main queue.
                dispatch_async(dispatch_get_main_queue(), ^{
                    UIGraphicsBeginImageContext(_imageSize);
                    // Show progress update.
                    _lblProgress.text = [NSString stringWithFormat:@"%d %.2f sec", ++_imageNr, -[_startDrawing timeIntervalSinceNow]];
                    _sldProgress.value = _imageNr;
                    NSString *filePath = [[NSBundle mainBundle] pathForResource:imageFile ofType:nil];
                    [[UIImage imageWithContentsOfFile:filePath] drawAtPoint:CGPointMake(0., 0.)];
                    // Get the image from the offscreen image context.
                    _imageView.image = UIGraphicsGetImageFromCurrentImageContext();
                    UIGraphicsEndImageContext();
                });
            }
        }
    });
    // Finished calculating image, release the dispatch queue.
    dispatch_release(calcImage);
}
I'm also struggling with the question of where the best place is to put the UIGraphicsBeginImageContext/UIGraphicsEndImageContext pair, since I want to minimize the amount of memory used and maximize overall performance.
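One way to get intermediate results on screen is to invert the structure above: do all of the drawing in a single context on the background queue and send only finished snapshots to the main queue. Below is a minimal sketch of that idea, not the asker's actual fix; it is untested, it assumes iOS 4 or later (where the UIGraphics* calls are allowed off the main thread), and snapshotImage is a name I introduced.

- (IBAction)doDrawLayers:(NSArray *)drawImages
{
    _startDrawing = [NSDate date];
    // Configure the UI on the main thread, before any background work starts.
    _sldProgress.minimumValue = 0;
    _sldProgress.maximumValue = drawImages.count;

    dispatch_queue_t calcImage = dispatch_queue_create("calcImage", NULL);
    dispatch_async(calcImage, ^{
        UIGraphicsBeginImageContext(_imageSize);   // one context for the whole stack
        NSUInteger imageNr = 0;
        for (NSString *imageFile in drawImages) {
            @autoreleasepool {
                NSString *filePath = [[NSBundle mainBundle] pathForResource:imageFile ofType:nil];
                [[UIImage imageWithContentsOfFile:filePath] drawAtPoint:CGPointZero];
                imageNr++;
                // Snapshot the accumulated drawing; this is the expensive part,
                // so in practice you might snapshot every N images instead.
                UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
                NSUInteger progress = imageNr;
                dispatch_async(dispatch_get_main_queue(), ^{
                    _imageView.image = snapshotImage;
                    _sldProgress.value = progress;
                    _lblProgress.text = [NSString stringWithFormat:@"%lu %.2f sec",
                                         (unsigned long)progress, -[_startDrawing timeIntervalSinceNow]];
                });
            }
        }
        UIGraphicsEndImageContext();
    });
    dispatch_release(calcImage);
}

This also suggests an answer to the Begin/End placement question: the context is created once before the loop and torn down once after it, so only small snapshot images ever cross the queue boundary.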

I finally found a solution by merging code from Apple's PhotoScroller sample (mainly its CATiledLayer usage) with the LargeImageDownsizing example code.
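For readers who land here later: the core of that approach is a view backed by a CATiledLayer, so drawing happens tile by tile on background threads and memory stays bounded no matter how many PNGs are stacked. A rough sketch with hypothetical class and property names (the real samples add level-of-detail handling on top of this):

#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface StackedImageView : UIView
@property (nonatomic, retain) NSArray *drawImages;   // PNG file names to stack
@end

@implementation StackedImageView

@synthesize drawImages;

+ (Class)layerClass
{
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect
{
    // CATiledLayer invokes this once per tile, off the main thread; UIKit
    // drawing into the current context is safe here on iOS 4 and later.
    // A real implementation would draw only the sub-rectangle of each
    // image that intersects `rect` rather than the whole stack.
    for (NSString *imageFile in self.drawImages) {
        NSString *filePath = [[NSBundle mainBundle] pathForResource:imageFile ofType:nil];
        [[UIImage imageWithContentsOfFile:filePath] drawAtPoint:CGPointZero];
    }
}

@end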

Related

OS X, Cocoa: How to highlight burned-out areas of a photo?

I've got an app that displays photos using NSImage – specifically, -[NSImage drawInRect:fromRect:operation:fraction:]. I want to highlight areas of the photo that are completely burned out (maximum values in all components, pure white) using a color like red, as some digital cameras and image processing apps do, to help the user see whether the image is overexposed, and how badly.
I've been scratching my head as to how to do this. Options I've considered:
I could probably write a Core Image filter to do it; none of the built-in filters look up to the task. That seems like overkill, though; I've been reading through the docs, and it looks fairly complicated. Big learning curve.
I could scan through the bitmap data for the image and modify it as necessary. This is easy enough to code for one bitmap format, but the multitude of bitmap formats makes it a rather annoying exercise, and speed is important here, so writing general-purpose code that renders the image into some maximal common format and works on that bitmap would carry too big a speed penalty.
As it happens, I am already scanning through images (handling all the different bitmap formats) at an earlier point in the code, to generate histogram data for the images. I could pretty easily add code at that point that would remember the burned-out pixels for later use. I'm not quite sure what the best way is to do that, though. A 1-bit-per-pixel NSBitmapImageRep? How would I draw it later, making the 1-pixels draw red and the 0-pixels draw transparent, for example? I don't want to make a 32-bit NSBitmapImageRep with an alpha channel and everything just for this purpose, as memory is not infinite and images are large. But there must be a way to draw a 1-bit mask in a given color, somehow.
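On the 1-bit mask idea: Core Graphics can draw such a mask in a solid color without any 32-bit representation. A minimal sketch, untested, where maskData, width, height, and dstRect are placeholders for data collected during the histogram pass:

// Wrap the 1-bit-per-pixel buffer in a CGImage mask.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, maskData,
                                                          ((width + 7) / 8) * height, NULL);
CGImageRef mask = CGImageMaskCreate(width, height,
                                    1,                  // bits per component
                                    1,                  // bits per pixel
                                    (width + 7) / 8,    // bytes per row
                                    provider, NULL, false);

// Clip to the mask and flood-fill the clipped region with red.
// (Which bit value paints is controlled by the decode array, passed as
// NULL above; supply {1, 0} instead if the sense comes out backwards.)
CGContextRef ctx = (CGContextRef)[[NSGraphicsContext currentContext] graphicsPort];
CGContextSaveGState(ctx);
CGContextClipToMask(ctx, dstRect, mask);
CGContextSetRGBFillColor(ctx, 1.0, 0.0, 0.0, 1.0);
CGContextFillRect(ctx, dstRect);
CGContextRestoreGState(ctx);

CGImageRelease(mask);
CGDataProviderRelease(provider);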
Before forging ahead with one of these approaches, I thought I'd see whether anybody here has a better idea. Or maybe has implemented the CI filter in question already? Apart from the learning curve, that seems like the best approach I've thought of so far – no memory overhead, and probably faster than other options, too.
Thanks...
Ben Haller
Stick Software
OK, I implemented my own Core Image filter to do this. Wasn't as hard as I expected, although the documentation is not great for this stuff. The doc examples all assume you're using ARC, so if you're not, following those examples will give you various retain/release bugs. There was also a little weirdness with the CIFilterConstructor stuff, which did not quite go as documented. But overall pretty easy. CI is cool. My code is below, for anybody who might find it useful:
Header:
#import <QuartzCore/QuartzCore.h>

@interface SSTintHighlightsFilter : CIFilter
{
    CIImage *inputImage;
    CIColor *highlightColor;
}
@end
Implementation file:
#import "SSTintHighlightsFilter.h"
static CIKernel *tintHighlightsFilter = nil;
#implementation SSTintHighlightsFilter
+ (void)initialize
{
[CIFilter registerFilterName:#"SSTintHighlightsFilter" constructor:(id )self
classAttributes:[NSDictionary dictionaryWithObjectsAndKeys:#"Tint Highlights", kCIAttributeFilterDisplayName, [NSArray arrayWithObjects:kCICategoryColorAdjustment, kCICategoryStillImage, nil], kCIAttributeFilterCategories, nil]];
}
+ (CIFilter *)filterWithName:(NSString *)name
{
CIFilter *filter = [[self alloc] init];
return [filter autorelease];
}
- (id)init
{
if (!tintHighlightsFilter)
{
NSBundle *bundle = [NSBundle bundleForClass:[self class]];
NSString *code = [NSString stringWithContentsOfFile:[bundle pathForResource:#"tintHighlightsAndShadows" ofType:#"cikernel"] encoding:NSASCIIStringEncoding error:NULL];
NSArray *kernels = [CIKernel kernelsWithString:code];
tintHighlightsFilter = [[kernels objectAtIndex:0] retain];
}
return [super init];
}
- (NSDictionary *)customAttributes
{
NSDictionary *attrs = #{
#"highlightColor" : #{ kCIAttributeClass : [CIColor class], kCIAttributeType : kCIAttributeTypeOpaqueColor }
};
return attrs;
}
- (CIImage *)outputImage
{
CISampler *src = [CISampler samplerWithImage:inputImage];
return [self apply:tintHighlightsFilter
arguments:[NSArray arrayWithObjects:src, highlightColor, nil]
options:[NSDictionary dictionaryWithObjectsAndKeys:[src definition], kCIApplyOptionDefinition, nil]];
}
#end
tintHighlights.cikernel:
kernel vec4 tintHighlights(sampler inputImage, __color highlightColor)
{
    vec4 originalColor, tintedColor;
    float sum;

    // fetch the source pixel
    originalColor = sample(inputImage, samplerCoord(inputImage));

    // calculate the color component sum as a way of testing whether we are black or white
    sum = originalColor.r + originalColor.g + originalColor.b;

    // replace pixels that are white with the highlight color
    tintedColor = (sum > 2.99999999999999999999999) ? highlightColor : originalColor;

    // preserve alpha
    tintedColor.a = originalColor.a;

    return tintedColor;
}
Using the filter:
+ (NSImage *)showHighlightsInImage:(NSImage *)img dstRect:(NSRect)dstRect
{
    NSGraphicsContext *currentContext = [NSGraphicsContext currentContext];
    NSRect dstRectForCGImage = dstRect;    // because the method below wants a pointer, and I don't trust it not to modify my rect...
    CGImageRef cgImage = [img CGImageForProposedRect:&dstRectForCGImage context:currentContext hints:nil];
    CIImage *inputImage = [[CIImage alloc] initWithCGImage:cgImage];

    [SSTintHighlightsFilter class];    // get my filter initialized
    CIFilter *highlightFilter = [CIFilter filterWithName:@"SSTintHighlightsFilter"];
    [highlightFilter setValue:inputImage forKey:@"inputImage"];
    [highlightFilter setValue:[CIColor colorWithRed:1.0 green:0.0 blue:0.0] forKey:@"highlightColor"];
    [inputImage release];

    CIImage *outputImage = [highlightFilter valueForKey:@"outputImage"];
    NSImage *resultImage = [[NSImage alloc] initWithSize:[img size]];
    [resultImage addRepresentation:[NSCIImageRep imageRepWithCIImage:outputImage]];
    return [resultImage autorelease];
}
I'm not sure that I'm handling the alpha entirely robustly, with premultiplication issues and so forth, but apart from that possible glitch it is working great.

How update image from URL in OS X app?

I have a problem. I have code that updates a song name and a picture from a PHP script. The song name works and updates, but the picture doesn't; on the PHP side everything works, just not in my project. How can I make the picture update from the URL, for example every 10 seconds? Thanks.
- (void)viewWillDraw
{
    NSURL *artistImageURL = [NSURL URLWithString:@"http://site.ru/ParseDataField/kiss.php?image"];
    NSImage *artistImage = [[NSImage alloc] initWithContentsOfURL:artistImageURL];
    [dj setImage:artistImage];

    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_async(queue, ^{
        NSError *error = nil;
        NSString *text = [NSString stringWithContentsOfURL:[NSURL URLWithString:@"http://site.ru/ParseDataField/kiss.php?artist"]
                                                  encoding:NSASCIIStringEncoding
                                                     error:&error];
        dispatch_async(dispatch_get_main_queue(), ^{
            [labelName setStringValue:text];
        });
    });
}
You should really consider placing this code someplace other than -viewWillDraw. This routine can be called multiple times for the same NSView under some circumstances and, more importantly, you need to call [super viewWillDraw] to make sure that things will actually draw correctly (if anything is drawn in the view itself).
For periodic updates (such as every 10 seconds), you should consider using NSTimer to trigger the retrieval of the next object.
As for the general question of why your image isn't being drawn correctly, you should probably consider putting the image retrieval and drawing code into the same structure as your label retrieval and drawing code. This will get the [dj setImage: artistImage] method call outside of the viewWillDraw chain which is likely causing some difficulty here.
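Putting both suggestions together, here is a minimal sketch of the timer-driven version. It is untested, reuses the dj and labelName outlets from the question, assumes manual reference counting like the rest of this thread, and the method name updateArtistInfo: is mine:

- (void)awakeFromNib
{
    // Fire every 10 seconds; the timer retains self for as long as it runs.
    [NSTimer scheduledTimerWithTimeInterval:10.0
                                     target:self
                                   selector:@selector(updateArtistInfo:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)updateArtistInfo:(NSTimer *)timer
{
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        // Fetch both the text and the image off the main thread.
        NSString *name = [NSString stringWithContentsOfURL:[NSURL URLWithString:@"http://site.ru/ParseDataField/kiss.php?artist"]
                                                  encoding:NSASCIIStringEncoding
                                                     error:NULL];
        NSImage *artistImage = [[NSImage alloc] initWithContentsOfURL:
                                   [NSURL URLWithString:@"http://site.ru/ParseDataField/kiss.php?image"]];
        // Touch the UI only on the main thread.
        dispatch_async(dispatch_get_main_queue(), ^{
            if (name)
                [labelName setStringValue:name];
            if (artistImage)
                [dj setImage:artistImage];
            [artistImage release];
        });
    });
}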

XCODE - How to update array?

I'm very new to Xcode/programming and I'm trying to modify existing code.
I'm having a small problem: I have a number of objects (enemies) on the screen at any one time and cannot redefine their count. I set my enemies to begin with 3 on the screen.
My objective is to change the number of enemies based on the current score.
I've attached snippets of the code below.
int numberOfEnemies;

if (self.score >= 100) {
    numberOfEnemies = 4;
}
else {
    numberOfEnemies = 3;
}

// Set up the array.
enemyArray = [[NSMutableArray alloc] init];
for (int i = 0; i < numberOfEnemies; i++) {
    [enemyArray addObject:[SpriteHelpers setupAnimatedSprite:self.view numFrames:3
                                              withFilePrefix:@"enemyicon" withDuration:((CGFloat)(arc4random() % 2) / 3 + 0.5)
                                                      ofType:@"png" withValue:0]];
}
enemyView = [enemyArray objectAtIndex:0];
What do I need to do to pass the new value of numberOfEnemies into the array when my score updates?
I'm going to move our conversation into an answer, since I don't want it to get too long-winded and I can easily edit and expand on this.
So far, we've established that the reason you're having issues is that you execute the above code in the viewDidLoad method, which runs at least once when the application first starts. The problem with this, as you've found out, is that you never get a chance to see a new score and then update the number of enemies.
Game update loops on iOS are usually structured along the following lines, but I would recommend finding a tutorial online for what may be a more efficient/correct way to do it.
From your current structure, I would take the code you have above and create a new function out of it:
-(void) updateDifficulty:(NSTimer *)gameTimer
{
//This can be the code you have above for now
}
Afterwards, inside of your viewDidLoad, I would put the following code:
- (void)viewDidLoad
{
    [super viewDidLoad];

    gameTimer = [NSTimer timerWithTimeInterval:1.0 target:self selector:@selector(updateDifficulty:) userInfo:nil repeats:YES];
    [[NSRunLoop mainRunLoop] addTimer:gameTimer forMode:NSDefaultRunLoopMode];
}
What that does is it declares a timer that will keep track of the game time, and with how it was declared, every 1 second it will call the updateDifficulty method. This is the general structure that you want, but again I would highly suggest you check out a game tutorial from Ray Wenderlich for example.
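To tie it back to the question, the body of -updateDifficulty: could rebuild the array along these lines. This is only a sketch that reuses the question's SpriteHelpers call; removing the old sprite views from the screen is omitted here and would need handling too:

- (void)updateDifficulty:(NSTimer *)gameTimer
{
    int numberOfEnemies = (self.score >= 100) ? 4 : 3;

    // Nothing changed; keep the current sprites.
    if ([enemyArray count] == numberOfEnemies)
        return;

    [enemyArray release];
    enemyArray = [[NSMutableArray alloc] init];
    for (int i = 0; i < numberOfEnemies; i++) {
        [enemyArray addObject:[SpriteHelpers setupAnimatedSprite:self.view numFrames:3
                                                  withFilePrefix:@"enemyicon"
                                                    withDuration:((CGFloat)(arc4random() % 2) / 3 + 0.5)
                                                          ofType:@"png" withValue:0]];
    }
    enemyView = [enemyArray objectAtIndex:0];
}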
Hope that helps!

Combine more than one image to get a single image

I have to capture the contents of a UIView. There are plenty of methods to capture contents. The problem is, if the view becomes too big, the app crashes because it takes a huge amount of memory. So is there any way to merge the raw image data (performing byte-by-byte operations) into a single file and make an image from that?
Merging two images to create one single image file
Suppose we have two images named file1.png and file2.png. The code below merges them.
@implementation LandscapeImage

- (void)createLandscapeImages
{
    CGSize offScreenSize = CGSizeMake(2048, 1380);
    UIGraphicsBeginImageContext(offScreenSize);

    // Fetch the first image and draw it in the left half.
    UIImage *imageLeft = [UIImage imageNamed:@"file1.png"];
    CGRect rect = CGRectMake(0, 0, 1024, 1380);
    [imageLeft drawInRect:rect];

    // Fetch the second image and draw it in the right half.
    UIImage *imageRight = [UIImage imageNamed:@"file2.png"];
    rect.origin.x += 1024;
    [imageRight drawInRect:rect];

    // Returns an image based on the contents of the current bitmap-based graphics context.
    UIImage *imagez = UIGraphicsGetImageFromCurrentImageContext();
    if (imageLeft && imageRight)
    {
        // Write code to save imagez in the local cache here.
    }
    UIGraphicsEndImageContext();
    // Note: imageNamed: returns autoreleased objects, so they must not be
    // released here; releasing them would over-release and crash.
}

@end
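The save step left as a comment above might look like this (the cache directory and file name here are hypothetical):

NSData *pngData = UIImagePNGRepresentation(imagez);
NSString *cacheDir = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
[pngData writeToFile:[cacheDir stringByAppendingPathComponent:@"merged.png"] atomically:YES];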

imageWithCGImage not being released or is trapped by Cache similar to imageNamed, any work around for generating dynamic images?

I'm generating UIImages with a bit-bucket, creating them on the fly and swapping the UIImageView's image. Is there a way to edit the UIImageView's Image directly? (ie. change the color of a specific pixel, without removing the UIImage from the UIImageView, and get it to redraw.)
Currently, I'm flushing the UIImage and using imageWithCGImage: to make a new one, which I assign to the UIImageView. This works and shows no memory leaks, but on an iPhone 3GS it crashes after about 100 image replacements. A caching issue? The memory total seems to be hitting the phone's limit, as if the cache never releases, yet the Simulator shows no memory growth with each image swap; it stays flat, without leaks.
Note: the topologyImage array is the RGBA pixel bucket. The CG Ref variables (color space, provider, image) are not released; every attempt to release them crashes the next call. Without releasing them, Instruments reports no leaks.
=========
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, topologyImage, (I * I * 4), NULL);
CGImageRef imageRef = CGImageCreate(I, I, 8, 4 * 8, 4 * I, colorSpaceRef, bitmapInfo, provider, NULL, false, renderingIntent);
UIImage *img = [UIImage imageWithCGImage:imageRef];

if (IMG[NDXtopo].vw) {
    [IMG[NDXtopo].vw setImage:img];
}
else {
    IMG[NDXtopo].vw = [[UIImageView alloc] initWithImage:img];
    [master.view addSubview:IMG[NDXtopo].vw];
}
Basically you should release your references, especially the CGImageRef, since imageWithCGImage: doesn't take ownership of the CGImage but rather seems to copy the data internally.
The docs on this are quite unclear, but from what I have found in my testing, if I don't release the CGImageRefs and CGDataProviderRefs, the application eventually gets memory warnings... and then crashes.
Not sure why you would have a crash, but in doing a quick test with:
UIImageView *view = [[UIImageView alloc] init];
int I = 128;
unsigned char *topologyImage = malloc(I * I * 4 * sizeof(unsigned char));

for (int i = 0; i < I * I * 4; i++)
{
    topologyImage[i] = 100;
}

for (int i = 0; i < 1000; i++)
{
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, topologyImage, (I * I * 4), NULL);
    CGImageRef imageRef = CGImageCreate(I, I, 8, 4 * 8, 4 * I, colorSpaceRef, bitmapInfo, provider, NULL, false, renderingIntent);

    // Release the intermediate CG objects as soon as the image is created.
    CGColorSpaceRelease(colorSpaceRef);
    CGDataProviderRelease(provider);

    UIImage *img = [UIImage imageWithCGImage:imageRef];
    view.image = img;
    CGImageRelease(imageRef);
}

free(topologyImage);
It seems to work just fine for me, so whatever is causing your crash appears to come from something outside this example, such as how the image data got into topologyImage in the first place.
