I have a UIImage category (Objective-C, iOS) that tints an image with a given UIColor. You can take a look at the method below:
- (UIImage *)tintImageWithTint:(UIColor *)color withIntensity:(float)alpha {
    CGSize size = self.size;
    UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale); // MEMORY SPIKE
    CGContextRef context = UIGraphicsGetCurrentContext();
    [self drawAtPoint:CGPointZero blendMode:kCGBlendModeNormal alpha:1.0]; // MEMORY SPIKE

    CGContextSetFillColorWithColor(context, color.CGColor);
    CGContextSetBlendMode(context, kCGBlendModeOverlay);
    CGContextSetAlpha(context, alpha);
    CGContextFillRect(context, CGRectMake(0, 0, size.width, size.height));

    // Get the tinted image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(context); // MEMORY SPIKE
    UIImage *tintedImage = [UIImage imageWithCGImage:newImageRef];
    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();

    return tintedImage;
}
I've been testing this method with Xcode's Instruments app and found that it accounts for more than 96.0% of all bytes allocated during the life cycle of the app. That's crazy! It allocates more than 5 MB very quickly, which triggers a level-one memory warning and causes other running processes to be terminated.
Instruments Allocation Analysis
The image that I'm tinting is only 111 KB, so I can't see how this code could possibly allocate more than 5 MB so quickly.
How can I improve the performance of this code and reduce its memory impact? I'm not very familiar with CoreGraphics, so any help would be appreciated.
2 Answers
Your image might be 111 KB compressed as a JPEG or PNG, but for image processing it has to be decompressed. It is loaded into memory with 8 or 16 bits per color channel; with RGB that can be up to 48 bits per pixel, plus 8 bits for the alpha channel in the case of PNG. At 4 bytes per pixel, 5 MB represents roughly 1.25 million pixels, i.e. something like 1000 × 1250 pixels.
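As a quick sanity check on that arithmetic (a sketch only — the actual dimensions of the asker's image aren't shown), the in-memory size of a decoded bitmap at a given bytes-per-pixel can be estimated like this:

```python
def decompressed_bytes(width_px, height_px, bytes_per_pixel=4):
    """Estimate the in-memory size of a decoded bitmap.

    bytes_per_pixel=4 corresponds to the common 32-bit RGBA layout
    (8 bits per channel), regardless of the compressed file size.
    """
    return width_px * height_px * bytes_per_pixel

# A 1000 x 1250 pixel image decoded as 32-bit RGBA:
size_mb = decompressed_bytes(1000, 1250) / (1024 * 1024)
print(f"{size_mb:.2f} MB")  # ~4.77 MB, close to the 5 MB seen in Instruments
```

Note also that the context in the question is created at [UIScreen mainScreen].scale, so on a 2x Retina device the backing bitmap holds four times as many pixels as the image's point size suggests.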
On iOS 7 devices you could use this approach instead. I can't see which images you're using, so I don't know whether this will be an actual improvement for you, but the new UIView snapshotting methods are supposedly very performant:
- (UIImage *)tintImageWithTint:(UIColor *)color withIntensity:(float)alpha {
    CGRect frame = CGRectMake(0, 0, self.size.width, self.size.height);

    UIView *view = [[UIView alloc] initWithFrame:frame];
    UIImageView *imageView = [[UIImageView alloc] initWithImage:self];

    UIView *tintView = [[UIView alloc] initWithFrame:frame];
    tintView.backgroundColor = color;
    tintView.alpha = alpha;

    [view addSubview:imageView];
    [view addSubview:tintView];

    UIGraphicsBeginImageContextWithOptions(self.size, NO, [UIScreen mainScreen].scale);
    [view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}
- could you explain your review a little more please? – Malachi, Jan 6, 2014 at 1:20
- @Malachi I'd be happy to. Are there specific questions I should answer? – Aaron Brager, Jan 6, 2014 at 1:26
- you could explain why these are more performant, or how it works, etc. I don't code in Objective-C or do anything for Apple devices, but I know that we like our reviews to have a little more information in them. – Malachi, Jan 6, 2014 at 1:27