
I have merged two images into one. I implemented it with some help from the internet, but it takes around 2.5 s. I'm testing on the simulator, so let's take that as the reference.

I currently use UIGraphicsBeginImageContext. Is there a faster way to achieve this?

extension UIImage {
    func overlayWith(image: UIImage, posX: CGFloat, posY: CGFloat, topImageSize: CGSize,
                     combinedImage: @escaping (UIImage) -> Void) {
        DispatchQueue.global(qos: .userInteractive).async {
            let newWidth = self.size.width < posX + image.size.width ? posX + image.size.width : self.size.width
            let newHeight = self.size.height < posY + image.size.height ? posY + image.size.height : self.size.height
            let newSize = CGSize(width: newWidth, height: newHeight)
            UIGraphicsBeginImageContextWithOptions(newSize, false, 0.0)
            self.draw(in: CGRect(origin: .zero, size: self.size))
            image.draw(in: CGRect(origin: CGPoint(x: posX, y: posY), size: topImageSize))
            let newImage = UIGraphicsGetImageFromCurrentImageContext()!
            UIGraphicsEndImageContext()
            DispatchQueue.main.async {
                combinedImage(newImage)
            }
        }
    }
}
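As a side note, the two ternaries that compute the canvas size can be expressed with `max()`, which reads more clearly. A small pure helper, assuming the same `posX`/`posY` semantics as the method above (`mergedCanvasSize` is an illustrative name, not part of the original code):

```swift
import Foundation  // CGSize/CGFloat are available via Foundation on non-Apple platforms

/// Computes the canvas size needed to hold the base image plus an
/// overlay placed at (posX, posY) — the same logic as the ternaries
/// in overlayWith, written with max() for clarity.
func mergedCanvasSize(base: CGSize, overlay: CGSize,
                      posX: CGFloat, posY: CGFloat) -> CGSize {
    CGSize(width: max(base.width, posX + overlay.width),
           height: max(base.height, posY + overlay.height))
}
```

For the sizes mentioned in the comments (a ~3000×4000 base and a ~1000×500 overlay), this returns the base size unless the overlay hangs past an edge.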
asked Dec 8, 2018 at 12:22
  • What is the size of the images you are using? — Commented Dec 9, 2018 at 12:42
  • @Carpsen90 the base image is an iPhone photo, so it is around 3000×4000, and the top image is smaller, ~1000×500. I implemented downsampling from WWDC 2018 session 219 (Image and Graphics Best Practices), but it only shrinks down a photo. I tested on an iPhone 7 and it took ~0.25 s, so it is almost instantaneous, but maybe there are some tricks to speed up this process? — Commented Dec 9, 2018 at 14:00
  • And by the way, when I'm combining two photos I get a peak memory use of about ~170 MB on iPhone 7. — Commented Dec 9, 2018 at 15:28
  • I've found that this code is almost twice as fast if topImageSize has the same width/height ratio as the size of image. Needed clarification: could these images have some transparency; could the alpha channel be different from 1.0? — Commented Dec 13, 2018 at 1:07
  • @Carpsen90 yep, the top image is a .png with a transparent background, but the base image is not; it is just an image taken with the iPhone camera — Commented Dec 13, 2018 at 8:00
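For reference, the downsampling technique from WWDC 2018 session 219 mentioned above decodes the photo directly into a smaller bitmap via ImageIO, instead of decoding the full ~3000×4000 image and scaling it afterwards. A sketch of that approach (the function name is illustrative; the ImageIO option keys are the ones shown in the session):

```swift
import UIKit
import ImageIO

/// Decodes the image at `url` straight to a thumbnail no larger than
/// maxPixelSize * scale on its longest side, avoiding a full-size decode.
func downsampledImage(at url: URL, maxPixelSize: CGFloat, scale: CGFloat) -> UIImage? {
    // Don't decode the full image up front; we only want the thumbnail.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
        return nil
    }
    let downsampleOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,       // decode now, on this queue
        kCGImageSourceCreateThumbnailWithTransform: true, // respect EXIF orientation
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize * scale
    ] as CFDictionary
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, downsampleOptions) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

As the commenter notes, this only shrinks a photo; it doesn't help with the compositing itself, but it does reduce the memory peak when the full resolution isn't needed.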

1 Answer


On a simulator with the original code, it takes about 1.52s on my machine.

Since the base image won't get resized (self.size is passed to self.draw(in:)), and its alpha channel is always 1, I could gain at least 200 ms by using the following:

self.draw(at: CGPoint.zero, blendMode: .copy, alpha: 1)
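Combining this blend-mode trick with UIGraphicsImageRenderer (the modern replacement for UIGraphicsBeginImageContextWithOptions, available on iOS 10+) might look like the sketch below. This is not part of the answer's measured code, and `fastOverlay` is an illustrative name:

```swift
import UIKit

extension UIImage {
    /// Sketch of the overlay using UIGraphicsImageRenderer plus the
    /// .copy blend mode for the base image. Timings are not guaranteed
    /// to match the original answer's measurements.
    func fastOverlay(with image: UIImage, posX: CGFloat, posY: CGFloat,
                     topImageSize: CGSize,
                     completion: @escaping (UIImage) -> Void) {
        DispatchQueue.global(qos: .userInitiated).async {
            let newSize = CGSize(width: max(self.size.width, posX + image.size.width),
                                 height: max(self.size.height, posY + image.size.height))
            let renderer = UIGraphicsImageRenderer(size: newSize)
            let result = renderer.image { _ in
                // .copy writes the opaque base photo without blending,
                // which is where the ~200 ms saving comes from.
                self.draw(at: .zero, blendMode: .copy, alpha: 1)
                // The transparent PNG on top still needs normal blending.
                image.draw(in: CGRect(origin: CGPoint(x: posX, y: posY),
                                      size: topImageSize))
            }
            DispatchQueue.main.async { completion(result) }
        }
    }
}
```

If the merged canvas is fully covered by opaque content, setting `opaque = true` on a `UIGraphicsImageRendererFormat` could save further work; with an overlay that extends past the base, leave it false.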
answered Dec 21, 2018 at 9:12
  • Thanks, cool trick. I managed to drop from 1.13 s to 1.07 s, or even 0.98 s, on the simulator. Not a major improvement, but a welcome one. — Commented Dec 25, 2018 at 21:49
