Using experimental images of particles for generation of training data #181

Answered by BenjaminMidtvedt
Yennda asked this question in Q&A

Hello,
I am trying to use DeepTrack to process data containing particle patterns, which is a fairly routine task. However, the patterns cannot be simulated with a combination of a Scatterer and Optics, so I thought of taking the pattern of one particle obtained experimentally and using it to generate the whole image. I am struggling with transforming it (the object `nanopars_noise`) into a form that can be fed into a model. Is there a standard way to implement this?

An example of my code follows:

```python
import numpy as np
import matplotlib.pyplot as plt
import deeptrack as dt
from deeptrack import Feature, Image

IMAGE_SIZE = 256
pattern = np.load(file_name + '.npy')  # experimentally measured particle pattern

class NanoParticle(Feature):
    def get(self, image, position, intensity, **kwargs):
        # Paste the normalized, intensity-scaled pattern into the image,
        # centred on the sampled position.
        x, y = pattern.shape
        image[position[0] - x // 2: position[0] + x - x // 2,
              position[1] - y // 2: position[1] + y - y // 2] = pattern / np.max(pattern) * intensity
        return image

nanopar = NanoParticle(
    position=lambda: np.random.randint(IMAGE_SIZE - 20, size=2) + 10,
    intensity=lambda: np.random.rand() * 0.8 + 0.2,
)
nanopars = nanopar ^ (lambda: np.random.randint(100))
nanopars_noise = nanopars >> dt.NormalizeMinMax() >> dt.Poisson(
    snr=lambda: np.random.rand() * 20 + 5, background=0.01
)

input_image = Image(np.zeros((IMAGE_SIZE, IMAGE_SIZE)))
output_image = nanopars_noise.resolve(input_image)
plt.imshow(output_image, cmap='gray')
```

Replies: 1 comment 3 replies

BenjaminMidtvedt

Sure! What do you want the network to do? Is it detection?

3 replies
Yennda

Yes, it is recognition and counting of the many particles present in the image.

BenjaminMidtvedt

Ok! First, I'd change the pipeline slightly, to:

```python
image_pipeline = dt.Value(lambda: np.zeros((IMAGE_SIZE, IMAGE_SIZE))) >> nanopars_noise
```

which allows you to resolve the pipeline without giving it an argument.
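For example, a minimal usage sketch (not part of the original reply):

```python
# update() re-samples the random properties; resolve() evaluates the pipeline.
image = image_pipeline.update().resolve()
```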

Second, to get the mask, you can do:

```python
# The per-particle mask; it can be any (W, H, 1) shape you want.
particle_mask = np.ones((1, 1, 1))

mask_pipeline = nanopars >> dt.SampleToMasks(
    lambda: lambda image: particle_mask,
    output_region=(0, 0, IMAGE_SIZE, IMAGE_SIZE),  # no optics object here, so give the region explicitly
    merge_method="or"
)
```
They can be combined and evaluated as follows:
```python
image_and_mask = image_pipeline & mask_pipeline
image, mask = image_and_mask.update().resolve()
```

`image_and_mask` can be fed directly to a DeepTrack U-Net, or resolved many times to create a dataset for any outside network.
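For the second option, a minimal sketch of a dataset-building loop (not from the original reply; the sample count is an arbitrary illustrative choice):

```python
# Each update()/resolve() draws a new random particle configuration.
images, masks = [], []
for _ in range(512):  # number of training samples is arbitrary here
    image, mask = image_and_mask.update().resolve()
    images.append(np.array(image))
    masks.append(np.array(mask))

# Stack into plain NumPy arrays for an external training loop.
X = np.stack(images)
Y = np.stack(masks)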

Answer selected by Yennda
Yennda

Thanks, that helped.
