Since you already have a review of the code, I’ll look at the image processing specifics.

sharpness = cv2.Laplacian(np.array(image), cv2.CV_64F).var()

The variance of the Laplacian is not necessarily related to sharpness. A noisy image will have a larger value than a noise-free image, even if equally sharp. An image with a larger (flat) background will have a lower value, even if perfectly in focus.
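To see the noise effect concretely, here is a small synthetic check (a sketch; the test image and noise level are made up for illustration):

import cv2
import numpy as np

rng = np.random.default_rng(0)
img = np.full((100, 100), 128.0)
img[:, 50:] = 200.0  # a perfectly sharp step edge
noisy = img + rng.normal(0, 5, img.shape)  # same edge, plus mild noise

print(cv2.Laplacian(img, cv2.CV_64F).var())    # edge only: modest value
print(cv2.Laplacian(noisy, cv2.CV_64F).var())  # much larger, driven by noise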

There is no good way to estimate sharpness without knowing what was imaged. What you have is a proxy that correlates with sharpness for sufficiently similar images, but is not valid in general as a measure of sharpness.

color_difference = np.max(np.array(image)) - np.min(np.array(image))

I’m not sure how the name applies: you’re not looking at colors, you’re looking at the difference between the largest value and the smallest one, which could be the large value of the green channel in one pixel and the small value of the red channel in the same pixel. For example, an image that is completely green would have a large color difference according to this measure, even though all pixels have the same color.
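To make that concrete (a tiny check, with a made-up image):

import numpy as np

green = np.zeros((10, 10, 3), np.uint8)
green[..., 1] = 255  # every pixel is the exact same pure green
print(int(green.max()) - int(green.min()))  # 255, despite zero color variation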

If you want to compute the largest difference in colors, compute the Euclidean distance between each pair of pixels (n^2 comparisons if the image has n pixels), preferably in a color space such as Lab, then pick the largest result.
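A minimal sketch of that idea; it assumes image_bgr is an 8-bit BGR OpenCV array, and it subsamples pixels (an arbitrary choice of mine) because the comparison is O(n²):

import cv2
import numpy as np

def max_color_difference(image_bgr, max_pixels=1000, seed=0):
    # Lab makes Euclidean distance a rough proxy for perceived color difference.
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2Lab).reshape(-1, 3).astype(np.float64)
    if len(lab) > max_pixels:  # subsample to keep the n^2 comparison tractable
        idx = np.random.default_rng(seed).choice(len(lab), max_pixels, replace=False)
        lab = lab[idx]
    diff = lab[:, None, :] - lab[None, :, :]  # all pairwise differences
    return np.sqrt((diff ** 2).sum(axis=-1)).max()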

color_histogram = np.histogram(np.array(image), bins=256, range=(0, 255))

Again, this is a histogram of values where you combine all channels. I would expect you to compute three histograms (one for each channel), or a single 3D histogram (an actual color histogram).
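For example (a sketch; img is assumed to be an 8-bit, 3-channel array):

import cv2
import numpy as np

# One histogram per channel:
per_channel = [np.histogram(img[..., c], bins=256, range=(0, 256))[0]
               for c in range(3)]

# Or an actual 3D color histogram, here with 16 bins per channel:
color_hist = cv2.calcHist([img], [0, 1, 2], None, [16, 16, 16],
                          [0, 256, 0, 256, 0, 256])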

color_saturation = np.mean(color_histogram[0]) / 255

The histogram contains counts of the pixels for each intensity. The mean of these counts is always the number of pixels divided by 256 (the number of bins). So this quantifies the image size.
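You can verify this directly:

arr = np.asarray(image)
counts, _ = np.histogram(arr, bins=256, range=(0, 255))
print(counts.mean(), arr.size / 256)  # the same number: pixel count / bin count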

To measure saturation, convert each pixel to a saturation value, for example by converting to HSV color space and taking the S channel, then compute the mean of these values.
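A sketch of that, assuming image converts to an 8-bit RGB array (use cv2.COLOR_BGR2HSV instead if it was loaded with OpenCV):

import cv2
import numpy as np

hsv = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2HSV)
color_saturation = hsv[..., 1].mean() / 255  # OpenCV stores S as 0-255 for uint8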

image_edge_detection = np.mean(cv2.Laplacian(np.array(image), cv2.CV_64F))

The mean of the Laplacian is, I would guess, close to 0 for an image with large flat areas and transitions between them. Only thin lines (ridges) would increase or decrease the mean value, depending on their color: dark and bright lines would cancel out in this measure. I’m not sure what name you should give this, but it’s not related to edges.

Note that you are computing the Laplacian again here; you should re-use the earlier result.
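For example, computed once and reused:

laplacian = cv2.Laplacian(np.array(image), cv2.CV_64F)
sharpness = laplacian.var()
image_edge_detection = laplacian.mean()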

image_noise = np.var(np.array(image))

You computed the standard deviation earlier and called it contrast. The variance is just the square of the standard deviation; how is that noise?
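That is, the two lines measure the same thing:

arr = np.asarray(image, dtype=np.float64)
print(np.var(arr), np.std(arr) ** 2)  # identical values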

To estimate noise, first identify flat regions in the image, then compute their variance. For example, the function dip.EstimateNoiseVariance() in DIPlib does this (disclosure: I’m an author of DIPlib).
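If you want to stay with NumPy/OpenCV, here is a rough sketch of the same idea (this is not DIPlib’s algorithm; it assumes a single-channel image, and the window size and percentile are arbitrary choices of mine):

import cv2
import numpy as np

def estimate_noise_variance(gray, win=7):
    gray = gray.astype(np.float64)
    mean = cv2.blur(gray, (win, win))
    mean_of_sq = cv2.blur(gray * gray, (win, win))
    local_var = np.maximum(mean_of_sq - mean * mean, 0)  # variance per window
    # Flat windows have the smallest local variance, which there is mostly due
    # to noise; a low percentile ignores windows containing edges or texture.
    return np.percentile(local_var, 10)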

You look for two clusters using k-means, then:

foreground_label = np.argmax(np.bincount(labels))
foreground_background_similarity = np.mean(np.abs(cluster_centers[foreground_label] - cluster_centers[1 - foreground_label]))

First of all, that second line is quite long. Try to break it up across lines, or do part of the computation in a separate statement.

But more importantly, you first assume that the larger cluster is the foreground, even though in the example you give the background is clearly larger. Then you go to great lengths to find the other cluster, subtract the two centroids, and take the absolute value of the result. You’d get the same result no matter which order you pick for these centroids, so you can simply do:

diff = cluster_centers[0] - cluster_centers[1]
foreground_background_similarity = np.mean(np.abs(diff))