Using experimental images of particles for generation of training data #181
-
Hello,
I am trying to use DeepTrack for processing data containing patterns of particles, which is kind of a routine task, but the patterns cannot be simulated with a combination of Scatterer and Optics. I therefore thought of taking the pattern of one particle obtained experimentally and using it to generate the whole image. However, I am struggling with transforming it (the object nanopars_noise) into a form that can be fed into a model. Is there a standard way to implement this?
The example code follows:

```python
import numpy as np
import matplotlib.pyplot as plt

import deeptrack as dt
from deeptrack import Feature, Image

IMAGE_SIZE = 256
pattern = np.load(file_name + '.npy')  # experimentally measured single-particle pattern


class NanoParticle(Feature):
    def get(self, image, position, intensity, **kwargs):
        # Stamp the normalized experimental pattern into the image at the given position.
        x, y = pattern.shape
        image[
            position[0] - x // 2: position[0] + x - x // 2,
            position[1] - y // 2: position[1] + y - y // 2,
        ] = pattern / np.max(pattern) * intensity
        return image


nanopar = NanoParticle(
    position=lambda: np.random.randint(IMAGE_SIZE - 20, size=2) + 10,
    intensity=lambda: np.random.rand() * 0.8 + 0.2,
)

# A random number (0-99) of particles per image, followed by normalization and Poisson noise.
nanopars = nanopar ^ (lambda: np.random.randint(100))
nanopars_noise = (
    nanopars
    >> dt.NormalizeMinMax()
    >> dt.Poisson(snr=lambda: np.random.rand() * 20 + 5, background=0.01)
)

input_image = Image(np.zeros((IMAGE_SIZE, IMAGE_SIZE)))
output_image = nanopars_noise.resolve(input_image)
plt.imshow(output_image, cmap='gray')
```
-
Sure! What do you want the network to do? Is it detection?
-
Yes, it is recognition and counting of the many particles present in the image.
-
Ok! First, I'd change the pipeline slightly, to:

```python
image_pipeline = dt.Value(lambda: np.zeros((IMAGE_SIZE, IMAGE_SIZE))) >> nanopars_noise
```

which allows you to resolve the pipeline without giving it an argument.
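For example (a minimal sketch, assuming the definitions above), a new random sample is then produced by:

```python
# update() re-draws the random properties, resolve() evaluates the pipeline.
image = image_pipeline.update().resolve()
plt.imshow(image, cmap='gray')
```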
Second, to get the mask, you can do:

```python
# Can be any (W, H, 1) shape you want.
particle_mask = np.ones((1, 1, 1))

mask_pipeline = nanopars >> dt.SampleToMasks(
    lambda: lambda image: particle_mask,
    output_region=optics.output_region,  # or an explicit region such as (0, 0, IMAGE_SIZE, IMAGE_SIZE)
    merge_method="or",
)
```

They can be combined and evaluated as follows:

```python
image_and_mask = image_pipeline & mask_pipeline
image, mask = image_and_mask.update().resolve()
```

image_and_mask can be fed directly to a DeepTrack U-Net, or resolved many times to create a dataset for any outside network.
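A minimal sketch of the second option, assuming the pipelines above (the dataset size N_SAMPLES is arbitrary):

```python
import numpy as np

N_SAMPLES = 128  # arbitrary number of training examples

images, masks = [], []
for _ in range(N_SAMPLES):
    # Each call re-samples the particle count, positions, intensities, and noise.
    image, mask = image_and_mask.update().resolve()
    images.append(np.array(image))
    masks.append(np.array(mask))

# Stack into arrays for training; add a channel axis to the images if your
# framework expects one (the masks typically already have a channel dimension).
X = np.stack(images)[..., np.newaxis]
Y = np.stack(masks)
```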
-
Thanks, that helped.