
2.1 improvements and new features #100

Answered by BenjaminMidtvedt
b-grimaud asked this question in Q&A

Hello,

First of all, thank you very much for your work and for making deep learning tools accessible to non-experts.
I've been experimenting with DeepTrack 2.0 for a while to address the limitations of the algorithmic approach that my team has used so far. I have reached a point where my model falls just short of our established methods in terms of performance, and I believe I can still improve it, which is why this new release looks very promising to me.

So far most of my work has been based on your older example for multiple particle tracking, and I've noticed a few differences in the newer one, such as a different way to get the "mask" of particles on simulated data, but, perhaps most importantly, the layers of the UNet aren't the same.

As the changelog suggested that more models were added, I looked into the docs and found several new models.
The most relevant to my application seems to be AutoMultiTracker, which, I'm assuming, is paired with the AutoTrackGenerator. However, as far as I could tell, it is not covered in the "models" examples.

Another interesting feature is dt.Sequential, as I'm analyzing videos but so far my model has been trained on a series of independent images. However, in the relevant example, an RNN model is used.

Which model would be the most appropriate for multi-particle tracking?
I apologize if it's something you had planned to cover in your tutorial videos.

As an aside, your "analyzing videos" tutorial breaks on cell 3.4 in Google Colab with `TypeError: '<' not supported between instances of 'float' and 'list'`. Let me know if I should open a separate issue.

Thank you very much!


Hi!

You've spotted a few misses on our side already, and for that we're grateful! You're right that AutoMultiTracker, which you found, is our new recommended way of doing multi-particle tracking. However, it has been renamed LodeSTAR in the code; it seems the documentation does not yet reflect this. I will make sure to fix it.

LodeSTAR, in turn, does have many examples of how to use it! All can be found in examples/lodestar. There's also a preprint available here: https://arxiv.org/abs/2202.13546. In short, it's a method for training a particle tracker directly on experimental data, without requiring annotations (beyond a rough crop of a region containing a single object that you want to track). It significantly outperforms the old method in terms of sub-pixel accuracy, and usually in detection accuracy as well. However, for highly noisy data (such as yours) the difference in detection quality is smaller.

I would recommend trying it out, since the amount of effort required to get it working is much less than for typical tracking methods.
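
Roughly, the whole workflow looks like this (a minimal sketch, not verbatim from the notebooks; `crop` and `frames` are placeholders for your own data, and the exact constructor and pipeline arguments may differ between versions):

```python
import deeptrack as dt

# Build the LodeSTAR model; a single-channel input of arbitrary size is assumed.
model = dt.models.LodeSTAR(input_shape=(None, None, 1))

# `crop` is a rough crop containing a single example of the object to track.
# The example notebooks wrap it in a pipeline with augmentations, omitted here.
train_set = dt.Value(crop)

# Hyperparameters follow the example notebook (30 epochs, batch size 8).
model.fit(train_set, epochs=30, batch_size=8)

# `frames` is a stack of full experimental images; detections come back
# per frame as (row, col) coordinates.
all_detections = model.predict_and_detect(frames)
```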

Sequential existed before 2.1, though it has been improved in terms of efficiency. While RNN-UNet combinations for tracking exist, they are significantly harder to train and optimize, and I would not recommend them at this time.

This release also comes with the new model MAGIK, which is a deep-learning method for tracing particles using graphs. A preprint can be found here: https://arxiv.org/abs/2202.06355. The lead developer of this project is still polishing the examples to make sure that users can tune it for their data as easily as possible. I can let him know to update you whenever they are ready for use!

Thank you for letting me know about the issue with the tutorial. We will look into it!


Thank you for the answer!

I've started experimenting with LodeSTAR, and it is already much simpler to get results out of it. However, I'm assuming results can vary depending on which crop I use for training? I've also noticed that the model seems to be very unstable when trying hyperparameters other than those provided in the example notebook (i.e. 30 epochs and a batch size of 8), returning the following in Colab:

```
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-8-94886cc5b519> in <module>()
----> 1 all_detections = model.predict_and_detect(data)
      2 
      3 for frame, frame_detections in zip(data, all_detections):
      4     plt.imshow(frame)
      5     plt.scatter(frame_detections[:, 1], frame_detections[:, 0], marker="x")

4 frames
/usr/local/lib/python3.7/dist-packages/skimage/morphology/extrema.py in h_maxima(image, h, selem)
    143 
    144     if (h == 0):
--> 145         raise ValueError("h = 0 is ambiguous, use local_maxima() "
    146                          "instead?")
    147 

ValueError: h = 0 is ambiguous, use local_maxima() instead?
```

I will keep optimizing the older UNet examples, but LodeSTAR seems very promising! I'll run it on my entire dataset and see how it compares in terms of localization precision and amount of data extracted.


Yes, it can vary with the choice of crop. However, this can be mitigated by choosing a few extra crops (~3-5 is usually enough to be consistent). I think example 7 shows how to do this, though it boils down to having a list of crops and choosing one at random with `dt.Value(lambda: random.choice(...))`.
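
Concretely, that pattern is just this (a sketch; `crop_1` through `crop_3` are placeholders for your own crops):

```python
import random
import deeptrack as dt

# A handful (~3-5) of rough crops, each containing one example object.
crops = [crop_1, crop_2, crop_3]

# dt.Value re-evaluates the lambda on each update, so every training
# image is built from a randomly chosen crop.
training_image = dt.Value(lambda: random.choice(crops))
```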

I had theorised this instability, but never seen it in practice. You can pass `mode="constant", cutoff=0.2` to `predict_and_detect`; this should remove the instability. Then, you can tune `cutoff`, `alpha`, and `beta` to optimize performance.
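
For reference, the call would look something like this (the `cutoff` value is the starting point suggested above; the `alpha` and `beta` values are assumed placeholders, shown only as the knobs to tune):

```python
all_detections = model.predict_and_detect(
    data,
    mode="constant",  # fixed threshold instead of the default mode
    cutoff=0.2,       # suggested starting point; tune for your data
    alpha=0.5,        # assumed placeholder; tune together with beta
    beta=0.5,
)
```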


Hi again,

Sorry to bring this back up after such a long time; a lack of time and resources meant that I couldn't get something properly working with DeepTrack.

I've been loosely following the updates of this repo over time, and it's great to see it's being actively maintained.

I just had a couple of questions before diving back into the tutorials and examples:

  1. For multiple particle tracking, would you recommend using LodeSTAR with experimental data or a UNet with simulated particles?

  2. Does MAGIK require manual annotation for the training dataset? Is it compatible with particle tracking without any of the "tree" aspect of cell tracking?

Thanks a lot!


No worries!

It depends on your circumstances. LodeSTAR is low effort and gives good performance in most cases. However, knowing how to push the performance can be tricky if the default configuration is not enough. Training on simulated particles is higher effort, especially if the conditions are hard to simulate. However, the path to pushing the performance is clear: make the simulations more like the experiments. LodeSTAR generally gives better sub-pixel positioning, while training on simulations may give better detection in very noisy data. Personally, I use LodeSTAR for everything except what I can trivially simulate (point particles, Mie optics, etc.).

For training from scratch, yes, but we've found that using a pre-trained MAGIK works well in almost all cases. @JesusPinedaC can help with this, but there should be a good example demonstrating it. Also, one can toggle off division/merge events to reconstruct only simple trajectories.
