Code Review

Rob
A couple of quick observations:

  1. In UIGraphicsImageRenderer, make sure to set the scale in the UIGraphicsImageRendererFormat to the scale of the image. Usually that is 1.0, unless the image is a screen snapshot, in which case it is the scale of the device it was snapshotted on. Bottom line, UIGraphicsImageRenderer defaults to the scale of the device, which can make your image much bigger than you intended. E.g., take a 400 × 400 px image, and the renderer on a 3× device will produce a 1,200 × 1,200 px image with no additional image data, which is the exact opposite of what you obviously intend. The correct scale is a function of the image in question, not the device.

  2. Do you have access to the original Data associated with this asset? (Note, I am not asking about the output of pngData or jpegData, but the raw data of the original asset.) E.g., a photo taken with a camera generally ships with decent JPEG compression already, and round-tripping it through UIImage and then re-applying JPEG compression of, say, 0.9 can simultaneously lose data and make the file bigger. Bottom line, decide to downscale/compress only if the original raw asset demands it.

  3. Once you decide that the original asset really is too big, you should probably remove the 1.0 quality from your array of JPEG compression settings, as it makes the asset huge with absolutely no image improvement. I think 0.8 is a fine starting point, or 0.9 if you want to be conservative. Try it out and you will see what I mean.

    Bottom line, 1.0 quality makes the file much bigger; 0.7–0.8 results in barely visible JPEG artifacts; and quality falls apart quickly below 0.6, IMHO.
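
To illustrate point 1, here is a minimal sketch of a resize helper (the name `resized(to:)` is mine, not from the question) that pins the renderer format's scale to the source image's own scale, so a 400 × 400 px image stays 400 × 400 px even on a 3× device:

```swift
import UIKit

extension UIImage {
    /// Redraws the image at `size` (in points), preserving the image's own
    /// scale rather than inheriting the device's screen scale.
    func resized(to size: CGSize) -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = scale          // the image's scale, NOT the device's
        let renderer = UIGraphicsImageRenderer(size: size, format: format)
        return renderer.image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}
```

If you omit the `format` parameter, the renderer uses the device's screen scale, which is how the 3× blow-up described above happens.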


Below you mention that you are using UIImagePickerController. If so, consider the following:


let formatter: NumberFormatter = {
    let formatter = NumberFormatter()
    formatter.numberStyle = .decimal
    return formatter
}()

extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)

        guard let asset = info[.phAsset] as? PHAsset else { return }

        asset.requestContentEditingInput(with: nil) { [self] input, info in
            guard
                let fileURL = input?.fullSizeImageURL,
                let data = try? Data(contentsOf: fileURL),
                let image = UIImage(data: data),
                let data1 = image.jpegData(compressionQuality: 1),
                let data9 = image.jpegData(compressionQuality: 0.9),
                let data8 = image.jpegData(compressionQuality: 0.8),
                let data7 = image.jpegData(compressionQuality: 0.7),
                let data6 = image.jpegData(compressionQuality: 0.6)
            else { return }

            print("original", formatter.string(for: data.count)!) // 2,227,880
            print(1.0, formatter.string(for: data1.count)!)       // 6,242,371
            print(0.9, formatter.string(for: data9.count)!)       // 3,672,570
            print(0.8, formatter.string(for: data8.count)!)       // 3,004,577
            print(0.7, formatter.string(for: data7.count)!)       // 2,576,892
            print(0.6, formatter.string(for: data6.count)!)       // 1,958,503
        }
    }
}

I am obviously not suggesting the above implementation for your code; I am merely illustrating (a) one way you can fetch the original asset, and (b) how its size compares to the jpegData output at various compression quality settings after round-tripping through a UIImage. Obviously, my numbers are from a random image in my photo library, and your values will vary, but the above sizes are entirely consistent with my historical experiments with various JPEG compression settings.

Perhaps needless to say, if accessing the photos library, you must request permission:

PHPhotoLibrary.requestAuthorization { status in
    print(status == .authorized)
}

And set NSPhotoLibraryUsageDescription in the Info.plist.
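
For reference, that Info.plist entry looks like this (the usage string below is just an example; supply your own wording, as the system shows it to the user in the permission prompt):

```
<key>NSPhotoLibraryUsageDescription</key>
<string>This app accesses your photo library so you can attach photos.</string>
```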
