My code is working fine, except for one thing: the performance is not up to the mark. What I am trying to achieve: I have an image with a few circles (each circle has a border in a different color than the circle's fill color). When the user touches any circle, I want to change the border color of the selected circle. That part works fine. What isn't working up to expectation: we also have a table in the UI from which the user can select multiple circles, 6 to 12 circles at most, and in this second scenario the implementation takes 3 to 4 seconds. Below I am sharing the code snippet I am using; if you find anything wrong, please guide me.
NOTE: I have two images, front and back. The front image is displayed to the user, and upon the user's interaction with the front image, the color of the touched point is read from the back image; if that touched point lies on any of the circles in the image, then the replaceColor method gets called.
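For illustration only, the touch handling is roughly like the sketch below; the view and property names and the colorOfPoint helper are placeholders rather than my actual code. After the sketch comes the replaceColor method itself.

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
 guard let point = touches.first?.location(in: frontImageView) else { return }
 // Look up the pixel color at the touched point in the hidden back image
 // (colorOfPoint is a placeholder for my pixel-lookup helper).
 let touchedColor = colorOfPoint(point, in: backImage)
 // If that color identifies one of the circles, recolor its border.
 if circleBorderColors.contains(touchedColor) {
 frontImageView.image = replaceColor(sourceColor: [touchedColor],
 withDestColor: selectedBorderColor,
 tolerance: 0.5)
 }
}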
func replaceColor(sourceColor: [UIColor], withDestColor destColor: UIColor, tolerance: CGFloat) -> UIImage {
 // This function expects to get the source colors (colors which are supposed to be replaced)
 // and the target color in the RGBA color space, hence we expect 4 color components: r, g, b, a
 // assert(sourceColor.cgColor.numberOfComponents == 4 && destColor.cgColor.numberOfComponents == 4,
 // "Must be RGBA colorspace")

 // Allocate a bitmap in memory with the same width and size as the destination (back) image
 let backImageBitmap = self.backImage!.cgImage! // back image bitmap
 let bitmapByteCountBackImage = backImageBitmap.bytesPerRow * backImageBitmap.height
 // A pointer to the memory block where the drawing is to be rendered
 let rawDataBackImage = UnsafeMutablePointer<UInt8>.allocate(capacity: bitmapByteCountBackImage)

 // A graphics context contains drawing parameters and all device-specific information
 // needed to render the paint on a page to a bitmap image
 let contextBackImage = CGContext(data: rawDataBackImage,
 width: backImageBitmap.width,
 height: backImageBitmap.height,
 bitsPerComponent: backImageBitmap.bitsPerComponent,
 bytesPerRow: backImageBitmap.bytesPerRow,
 space: backImageBitmap.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
 bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue)

 // Draw the bitmap into the created context
 contextBackImage!.draw(backImageBitmap, in: CGRect(x: 0, y: 0, width: backImageBitmap.width, height: backImageBitmap.height))

 // Allocate a bitmap in memory with the same width and size as the source (front) image
 let frontImageBitmap = self.frontImage!.cgImage! // front image bitmap
 let bitmapByteCountFrontImage = frontImageBitmap.bytesPerRow * frontImageBitmap.height
 // A pointer to the memory block where the drawing is to be rendered
 let rawDataFrontImage = UnsafeMutablePointer<UInt8>.allocate(capacity: bitmapByteCountFrontImage)

 let contextFrontImage = CGContext(data: rawDataFrontImage,
 width: frontImageBitmap.width,
 height: frontImageBitmap.height,
 bitsPerComponent: frontImageBitmap.bitsPerComponent,
 bytesPerRow: frontImageBitmap.bytesPerRow,
 space: frontImageBitmap.colorSpace ?? CGColorSpaceCreateDeviceRGB(),
 bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue)

 // Draw the bitmap into the created context
 contextFrontImage!.draw(frontImageBitmap, in: CGRect(x: 0, y: 0, width: frontImageBitmap.width, height: frontImageBitmap.height))

 // Get color components from the replacement color
 let destinationColorComponents = destColor.cgColor.components
 let r2 = UInt8(destinationColorComponents![0] * 255)
 let g2 = UInt8(destinationColorComponents![1] * 255)
 let b2 = UInt8(destinationColorComponents![2] * 255)
 let a2 = UInt8(destinationColorComponents![3] * 255)

 // Prepare to iterate over image pixels
 var byteIndex = 0
 while byteIndex < bitmapByteCountBackImage {
 // Get the color of the current pixel
 let red = CGFloat(rawDataBackImage[byteIndex + 0]) / 255
 let green = CGFloat(rawDataBackImage[byteIndex + 1]) / 255
 let blue = CGFloat(rawDataBackImage[byteIndex + 2]) / 255
 let alpha = CGFloat(rawDataBackImage[byteIndex + 3]) / 255
 let currentColorBackImage = UIColor(red: red, green: green, blue: blue, alpha: alpha)

 // Compare the colors using the given tolerance value
 if sourceColor.contains(currentColorBackImage) {
 // If they're "similar", then replace the pixel color with the given target color
 rawDataFrontImage[byteIndex + 0] = r2
 rawDataFrontImage[byteIndex + 1] = g2
 rawDataFrontImage[byteIndex + 2] = b2
 rawDataFrontImage[byteIndex + 3] = a2
 }
 byteIndex += 4
 }

 // Retrieve the image from the memory context
 let imgref = contextFrontImage!.makeImage()
 let result = UIImage(cgImage: imgref!)

 // Clean up a bit
 rawDataBackImage.deallocate()
 rawDataFrontImage.deallocate()

 return result
}
1 Answer
There are two issues:

1. The process of converting to and from UIColor objects is very expensive.
2. The process of calling contains on an Array is O(n).
The Time Profiler will show you where the time goes. FWIW, I used a "Points of Interest" OSLog:
import os.log
private let log = OSLog(subsystem: Bundle.main.bundleIdentifier!, category: .pointsOfInterest)
And logged the range:
@IBAction func didTapProcessButton(_ sender: Any) {
 os_signpost(.begin, log: log, name: #function)
 let final = replaceColor(frontImage: circleImage, backImage: squareImage, sourceColor: [.blue], withDestColor: .green, tolerance: 0.25)
 os_signpost(.end, log: log, name: #function)
 processedImageView.image = final
}
Then I could easily zoom into just that interval using the "Points of Interest" tool. Having done that, I can switch to the "Time Profiler" tool, and it shows that 49.3% of the time was spent in contains and 24.9% of the time was spent in the UIColor initializer.

I can also double-click on the replaceColor function in that call tree, and it will show the same information annotated right in my code (for debug builds, at least).
So, regarding the UIColor issue: in Change color of certain pixels in a UIImage, I explicitly use a UInt32 representation of the color (and have a struct to provide user-friendly interaction with this 32-bit integer). I do this to enjoy efficient integer processing and to avoid UIColor. In that case, processing the colors in a 1920 ×ばつ 1080 px image takes 0.03 seconds (avoiding the UIColor to-and-fro for each pixel).
The bigger issue is that contains is very inefficient. If you must use contains-style logic (once you are using UInt32 representations of your colors), I would suggest using a Set, with O(1) lookups, rather than an Array (with O(n) lookups).

But even then, the contains approach is inefficient. (In my example, my array had only one item.) I see an unused tolerance parameter, and I wonder if you might consider doing this mathematically rather than looking up colors in some collection.
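For what it's worth, if you do stick with a collection lookup once you have UInt32 color values, a Set gives O(1) membership tests. A minimal sketch, assuming the RGBA32 struct shown later in this answer is also declared Hashable (in the same file, so the conformance is synthesized from its UInt32 storage) and borrowing the buffer names from the full example below:

extension RGBA32: Hashable { }

// Build the set of colors to match once, outside the pixel loop.
let colorsToReplace: Set<RGBA32> = [RGBA32(color: .blue), RGBA32(color: .red)]

// The per-pixel membership test is now O(1) rather than O(n).
for offset in 0 ..< backImagePixelCount {
 if colorsToReplace.contains(bufferBackImage[offset]) {
 bufferFrontImage[offset] = replacementColorRGB
 }
}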
Unrelated to your performance issue, you have a very serious memory problem here. The provided code snippet sizes bitmapByteCountFrontImage and bitmapByteCountBackImage from the source images' bytesPerRow × height, but the contexts you create use four bytes per pixel, so each buffer must be at least width × height × 4 bytes (i.e., width × 4 bytes per row). If a source image's bytes per row doesn't match that, the buffer you allocate is too small for the context that draws into it, so make sure you size your buffers accordingly.
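If you do keep the manual allocation, here is a minimal sketch of sizing the buffer for the format of the context you create, not the source image's format (shown for the back image; the front image is analogous):

// Size the buffer for the RGBA context we are about to create:
// 4 bytes per pixel, so bytesPerRow is width * 4 and the total is width * height * 4.
let widthBackImage = backImageBitmap.width
let heightBackImage = backImageBitmap.height
let bytesPerPixel = 4
let bytesPerRowBackImage = widthBackImage * bytesPerPixel
let bitmapByteCountBackImage = bytesPerRowBackImage * heightBackImage
let rawDataBackImage = UnsafeMutablePointer<UInt8>.allocate(capacity: bitmapByteCountBackImage)
// ...and pass the same bytesPerRowBackImage to the CGContext initializer,
// so the context's layout matches the buffer you allocated.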
Personally, I get out of the business of manually allocating buffers, and let CGContext do that for me:
let bitmapInfo: UInt32 = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue

let contextFrontImage = CGContext(data: nil,
 width: frontImageBitmap.width,
 height: frontImageBitmap.height,
 bitsPerComponent: 8, // eight bits per component, not the source image's bits per component
 bytesPerRow: frontImageBitmap.width * 4, // explicitly four bytes per pixel, not the source image's bytes per row
 space: CGColorSpaceCreateDeviceRGB(), // explicit color space, not the source image's color space
 bitmapInfo: bitmapInfo)

guard let dataFrontImage = contextFrontImage?.data else {
 print("unable to get front image buffer")
 return nil
}

let rawDataFrontImage = dataFrontImage.assumingMemoryBound(to: UInt8.self)
And when you do this, it frees you from manually deallocating later (which, when you start passing images around, gets very complicated very quickly).

I would also advise against referencing the source image's bytes per row, bits per component, and color space. The whole purpose of creating and drawing into this new context and grabbing its buffer is to get a new, known, predetermined format, not to rely on the original image's parameters. This also obviates the assert logic that you have commented out at the start of your routine, as we no longer care about the format of the original images.
For example, here is a rendition that uses 32-bit integers for colors and replaces the contains logic with an arithmetic calculation; it processes a 1920 ×ばつ 1080 px image in the simulator, in a release build, in 0.01 seconds:
/// Replace colors in the "front" image on the basis of colors within the "back" image matching a requested color within a certain tolerance.
///
/// - Parameters:
/// - frontImage: The image that will be used as the basis of the returned image (i.e., wherever `searchColor` was not found in `backImage`).
/// - backImage: The image in which we're going to look for `searchColor`.
/// - searchColor: The color we are looking for in `backImage`.
/// - replacementColor: The color we are going to replace it with if found within the specified `tolerance`.
/// - tolerance: The tolerance (in `UInt8`) to use when looking for `searchColor`. E.g., a `tolerance` of 5 matches colors whose red, green, and blue components are each within 5 of `searchColor`'s.
/// - Returns: The resulting image.
func replaceColor(frontImage: UIImage, backImage: UIImage, searchColor: UIColor, replacementColor: UIColor, tolerance: UInt8) -> UIImage? {
 guard
 let backImageBitmap = backImage.cgImage,
 let frontImageBitmap = frontImage.cgImage
 else {
 print("replaceColor: Unable to get cgImage")
 return nil
 }

 let searchColorRGB = RGBA32(color: searchColor)
 let (searchColorMin, searchColorMax) = searchColorRGB.colors(tolerance: tolerance)
 let replacementColorRGB = RGBA32(color: replacementColor)

 // Graphics context parameters
 let bitsPerComponent = 8
 let colorspace = CGColorSpaceCreateDeviceRGB()
 let bitmapInfo = RGBA32.bitmapInfo
 let width = backImageBitmap.width
 let height = backImageBitmap.height

 // back image
 let backImageBytesPerRow = width * 4
 let backImagePixelCount = width * height

 let contextBackImage = CGContext(data: nil,
 width: width,
 height: height,
 bitsPerComponent: bitsPerComponent,
 bytesPerRow: backImageBytesPerRow,
 space: colorspace,
 bitmapInfo: bitmapInfo)

 guard let dataBackImage = contextBackImage?.data else {
 print("replaceColor: Unable to get back image buffer")
 return nil
 }

 let bufferBackImage = dataBackImage.bindMemory(to: RGBA32.self, capacity: width * height)
 contextBackImage!.draw(backImageBitmap, in: CGRect(x: 0, y: 0, width: width, height: height))

 // front image
 let contextFrontImage = CGContext(data: nil,
 width: width,
 height: height,
 bitsPerComponent: bitsPerComponent,
 bytesPerRow: width * 4,
 space: colorspace,
 bitmapInfo: bitmapInfo)

 guard let dataFrontImage = contextFrontImage?.data else {
 print("replaceColor: Unable to get front image buffer")
 return nil
 }

 let bufferFrontImage = dataFrontImage.bindMemory(to: RGBA32.self, capacity: width * height)
 contextFrontImage!.draw(frontImageBitmap, in: CGRect(x: 0, y: 0, width: width, height: height))

 // Prepare to iterate over image pixels
 for offset in 0 ..< backImagePixelCount {
 let color = bufferBackImage[offset]

 // Compare two colors using given tolerance value
 if color.between(searchColorMin, searchColorMax) {
 bufferFrontImage[offset] = replacementColorRGB
 }
 }

 // Retrieve image from memory context
 return contextFrontImage?.makeImage().flatMap {
 UIImage(cgImage: 0ドル)
 }
}
Where
struct RGBA32: Equatable {
 private var color: UInt32

 var redComponent: UInt8 {
 return UInt8((color >> 24) & 255)
 }

 var greenComponent: UInt8 {
 return UInt8((color >> 16) & 255)
 }

 var blueComponent: UInt8 {
 return UInt8((color >> 8) & 255)
 }

 var alphaComponent: UInt8 {
 return UInt8((color >> 0) & 255)
 }

 init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
 let red = UInt32(red)
 let green = UInt32(green)
 let blue = UInt32(blue)
 let alpha = UInt32(alpha)
 color = (red << 24) | (green << 16) | (blue << 8) | (alpha << 0)
 }

 init(color: UIColor) {
 var red: CGFloat = .zero
 var green: CGFloat = .zero
 var blue: CGFloat = .zero
 var alpha: CGFloat = .zero
 color.getRed(&red, green: &green, blue: &blue, alpha: &alpha)
 self.color = (UInt32(red * 255) << 24) | (UInt32(green * 255) << 16) | (UInt32(blue * 255) << 8) | (UInt32(alpha * 255) << 0)
 }

 func colors(tolerance: UInt8) -> (RGBA32, RGBA32) {
 let red = redComponent
 let green = greenComponent
 let blue = blueComponent
 let alpha = alphaComponent

 let redMin = red < tolerance ? 0 : red - tolerance
 let greenMin = green < tolerance ? 0 : green - tolerance
 let blueMin = blue < tolerance ? 0 : blue - tolerance
 let alphaMin = alpha < tolerance ? 0 : alpha - tolerance

 let redMax = red > (255 - tolerance) ? 255 : red + tolerance
 let greenMax = green > (255 - tolerance) ? 255 : green + tolerance
 let blueMax = blue > (255 - tolerance) ? 255 : blue + tolerance
 let alphaMax = alpha > (255 - tolerance) ? 255 : alpha + tolerance

 return (RGBA32(red: redMin, green: greenMin, blue: blueMin, alpha: alphaMin),
 RGBA32(red: redMax, green: greenMax, blue: blueMax, alpha: alphaMax))
 }

 func between(_ min: RGBA32, _ max: RGBA32) -> Bool {
 return
 redComponent >= min.redComponent && redComponent <= max.redComponent &&
 greenComponent >= min.greenComponent && greenComponent <= max.greenComponent &&
 blueComponent >= min.blueComponent && blueComponent <= max.blueComponent
 }

 static let red = RGBA32(red: 255, green: 0, blue: 0, alpha: 255)
 static let green = RGBA32(red: 0, green: 255, blue: 0, alpha: 255)
 static let blue = RGBA32(red: 0, green: 0, blue: 255, alpha: 255)
 static let white = RGBA32(red: 255, green: 255, blue: 255, alpha: 255)
 static let black = RGBA32(red: 0, green: 0, blue: 0, alpha: 255)
 static let magenta = RGBA32(red: 255, green: 0, blue: 255, alpha: 255)
 static let yellow = RGBA32(red: 255, green: 255, blue: 0, alpha: 255)
 static let cyan = RGBA32(red: 0, green: 255, blue: 255, alpha: 255)

 static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
}
And you'd call it like so:
let resultImage = replaceColor(frontImage: frontImage, backImage: backImage, searchColor: .blue, replacementColor: .green, tolerance: 5)
Resulting in the recolored front image (the front, back, and resulting images, going left to right).
Clearly, you can implement the tolerance logic however you want, but hopefully this illustrates the idea that excising UIColor and collection searching can have a dramatic impact on performance.
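For example, one alternative to the per-channel min/max test above is a simple distance check. This is just a sketch of the idea (the method name and the threshold are illustrative), which you could drop into the pixel loop in place of the between(_:_:) test:

extension RGBA32 {
 /// Treats the colors as points in RGB space and compares their squared
 /// Euclidean distance against a threshold (alpha is ignored, as in
 /// `between(_:_:)` above).
 func isClose(to other: RGBA32, maxDistanceSquared: Int) -> Bool {
 let dr = Int(redComponent) - Int(other.redComponent)
 let dg = Int(greenComponent) - Int(other.greenComponent)
 let db = Int(blueComponent) - Int(other.blueComponent)
 return dr * dr + dg * dg + db * db <= maxDistanceSquared
 }
}

// In the pixel loop, e.g. allowing roughly ±25 per channel:
// if bufferBackImage[offset].isClose(to: searchColorRGB, maxDistanceSquared: 3 * 25 * 25) {
// bufferFrontImage[offset] = replacementColorRGB
// }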
- Raja Saad (Sep 13, 2021): Thanks a lot for saving my time!! The way you described each and every issue and the solution to it was more than amazing. Now my code is working like a charm after I followed your instructions and your code snippet.
- ... contains call and the UIColor conversion. As you point out, the "Time Profiler" brings the real issues into stark relief.
- ... UIColor constructor. Nice answer BTW.