
ENH: interface for whitestripe normalization #3351


Open
Terf wants to merge 5 commits into nipy:master from PennSIVE:whitestripe

Conversation

Contributor

Terf commented Jun 29, 2021

Summary

A processing pipeline for MS usually follows these preprocessing steps:

  1. N4 correction
  2. Registration
  3. Skull stripping
  4. Intensity normalization

nipype provides interfaces for tools covering all of these steps except intensity normalization. The WhiteStripe method is especially well suited to MS brains, as lesions can skew mean/variance estimates.
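For context, the core of the WhiteStripe idea is to find the white-matter peak of the intensity histogram and z-score the image against a narrow quantile "stripe" around that peak, so lesion outliers never enter the normalization statistics. The sketch below is an illustrative simplification in plain Python on synthetic 1-D intensities; the real interface shells out to the WhiteStripe R package on NIfTI volumes, and every name here is hypothetical.

```python
import random
import statistics
from bisect import bisect_left

def whitestripe_normalize(intensities, tau=0.05, bins=100):
    """Hedged sketch of WhiteStripe-style normalization.

    1. Find the white-matter peak: the tallest histogram bin above
       the median intensity (WM is the brightest large tissue class
       on T1-weighted images).
    2. Take the "white stripe": voxels within +/- tau (in quantile
       units) of the peak's position in the empirical CDF.
    3. Z-score the whole image with the stripe's mean and sd.
    """
    xs = sorted(intensities)
    n = len(xs)
    lo, hi = xs[0], xs[-1]
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in xs:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    median = xs[n // 2]
    first = min(bins - 1, int((median - lo) / width))
    peak_bin = max(range(first, bins), key=counts.__getitem__)
    mode = lo + (peak_bin + 0.5) * width
    # quantile of the WM peak in the empirical CDF
    q = bisect_left(xs, mode) / n
    lo_i = int(max(q - tau, 0.0) * n)
    hi_i = int(min(q + tau, 1.0) * n)
    stripe = xs[lo_i:max(hi_i, lo_i + 2)]
    mu = statistics.fmean(stripe)
    sd = statistics.stdev(stripe)
    return [(x - mu) / sd for x in intensities]
```

On a synthetic image with a bright lesion class, the lesion voxels inflate a global mean/variance z-score, but they fall entirely outside the stripe, so WM still lands near zero after normalization.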

List of changes proposed in this PR (pull-request)

  • Added WhiteStripe interface

Acknowledgment

  • (Mandatory) I acknowledge that this contribution will be available under the Apache 2 license.


codecov bot commented Jun 29, 2021

Codecov Report

Merging #3351 (3a64072) into master (0289137) will decrease coverage by 0.03%.
The diff coverage is 43.10%.

Impacted file tree graph

```diff
@@            Coverage Diff             @@
##           master    #3351      +/-   ##
==========================================
- Coverage   65.08%   65.05%   -0.04%
==========================================
  Files         307      308       +1
  Lines       40362    40420      +58
  Branches     5329     5333       +4
==========================================
+ Hits        26270    26295      +25
- Misses      13019    13050      +31
- Partials     1073     1075       +2
```

| Flag | Coverage Δ |
|------|------------|
| unittests | 64.82% <43.10%> (+0.01%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
|----------------|------------|
| nipype/interfaces/r.py | 83.33% <ø> (ø) |
| nipype/interfaces/whitestripe.py | 43.10% <43.10%> (ø) |


Contributor Author

Terf commented Jul 7, 2021

@effigies It looks like I have some more work to do to fix the failing CI tests, but I'm wondering if this is something you'd accept into nipype in the first place?

Member

effigies commented Jul 7, 2021

Absolutely! Please let me know if you want feedback before working on the tests, or if you want to get things fixed up and then get feedback.

Contributor Author

Terf commented Jul 7, 2021

@effigies I’d love some feedback before working on the tests, as I’m not sure how stringent they should be. Right now the test runs whitestripe on a NIfTI downloaded from a public dataset, then simply checks that the output file got created. A more stringent test might actually compare the intensities of the normalized vs. original image, but I’m not sure whether our tests should exercise whitestripe itself, or just verify that the interface generates the correct command-line string.

I think a robust test would actually run whitestripe (maybe in addition to checking the command-line string?), but that’s complicated by the fact that neither R nor the whitestripe R package is installed in the official nipype Docker image. Since the whitestripe interface isn’t usable unless whitestripe is installed, perhaps as a first step I should submit a PR to create a (perhaps separate?) container image with R and whitestripe installed?

Member

effigies commented Jul 7, 2021

> I’m not sure whether our tests should be for whitestripe itself, or just testing that the interface generates the correct command-line string

Generally nipype concerns itself with the command-line, if we're not implementing the algorithms ourselves. There may be some rare cases where we do exercise the underlying tools, but that's typically going to be when the operations are pretty cheap.
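To make that concrete, a command-line-focused test only needs the interface's command assembly, not a working R installation. Below is a hypothetical sketch of the kind of `Rscript -e` invocation such an interface might render; the R function names (`whitestripe`, `whitestripe_norm`, `neurobase::readnii`/`writenii`) follow the WhiteStripe and neurobase packages, but this is not the PR's actual code.

```python
def build_whitestripe_cmd(img_path, out_path, img_type="T1"):
    """Assemble a hypothetical Rscript call for WhiteStripe normalization.

    Asserting on this string is cheap and needs no R installation,
    which is the style of test nipype generally prefers.
    """
    script = (
        "library(WhiteStripe);"
        f"img = neurobase::readnii('{img_path}');"
        f"ws = whitestripe(img, type='{img_type}');"
        "norm = whitestripe_norm(img, ws$whitestripe.ind);"
        f"neurobase::writenii(norm, '{out_path}')"
    )
    return ["Rscript", "-e", script]
```

A unit test then checks the rendered argument list rather than running R, e.g. that the input path, output path, and image type all appear in the `-e` expression.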

> As the whitestripe interface isn’t usable unless whitestripe is installed, perhaps as a first step I should submit a PR to create (perhaps a separate?) container image with R and whitestripe installed?

If you want to install R and whitestripe, then go ahead and do it in this PR.

Contributor Author

Terf commented Jul 9, 2021

@effigies It seems one of the checks is failing because the auto test `nipype/interfaces/tests/test_auto_WhiteStripe.py` doesn't exist. I tried running `make specs`, but that didn't seem to generate the file. I also tried running everything with just `make` and got a segmentation fault from Sphinx, though I'm not sure if that's just a problem with my environment.

But otherwise I rewrote the test (which passes for me locally) and added R and whitestripe to the nipype Docker image. One thing to note is that the main image is now 6.98 GB on my machine. I'm not sure how much of a size reduction you get by compressing layers afterwards, but R is a hefty dependency, and so are the neuroimaging packages required by whitestripe. Maybe it would make sense to offer a "slim" version of the image without R and a "full" version with everything?
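If a split image is the way to go, one minimal sketch is a "full" variant that layers R on top of the existing image. The base tag and the Debian/CRAN package names below are assumptions, not the PR's actual Dockerfile:

```dockerfile
# Hypothetical "full" variant layered on the slim nipype image.
FROM nipype/nipype:latest
USER root
RUN apt-get update \
    && apt-get install -y --no-install-recommends r-base r-base-dev \
    && rm -rf /var/lib/apt/lists/*
# WhiteStripe pulls in the neuroimaging R stack (oro.nifti etc.),
# which accounts for much of the size increase.
RUN Rscript -e "install.packages('WhiteStripe', repos = 'https://cloud.r-project.org')"
```

Publishing this under a separate tag (e.g. `-full`) would keep the default image small for users who never touch the R interfaces.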

Member
effigies commented

I ran `make specs` and pushed the new file in 3a64072. If you fetch from the main repository, you should be able to merge it into your branch.


Copy link
Contributor Author

Terf commented Jul 19, 2021

@effigies Looks like all the tests are passing now.

On a somewhat unrelated note, I wanted to get your input / approval on another interface I'd like to add for the mimosa lesion segmentation method. Since training requires manually segmented images, I imagine nearly everyone will use the built-in models my lab has trained, but I think including the training step in the interface could still be useful. Should the training and testing parts be bundled into the same interface, or should there be two interfaces, one for training a new model and one for applying that model to new data?

I also wanted to write a tutorial (like the ones here) showing how to segment MS lesions by building a nipype workflow that uses ANTS for bias correction / skull stripping, whitestripe for normalization, and finally mimosa. I'm a bit confused about where the source files for the documentation live, though; I looked through the docs but don't see the many articles listed under the User Guide and Examples section of the website.

effigies mentioned this pull request Oct 15, 2021