imcf/RESPAN (forked from lahammond/RESPAN)

Robust, accurate, and unbiased quantification of neurons and dendritic spines.

RESPAN: Restoration Enhanced SPine And Neuron Analysis.


RESPAN is an end‐to‐end, GPU‐accelerated pipeline that restores, segments, and quantifies dendrites and dendritic spines in fluorescence microscopy images in a robust, accurate, and unbiased manner. The pipeline was designed to be efficient and accessible while leveraging the latest advances in content‐aware restoration, image segmentation, and GPU processing. For ease of use, RESPAN is available both as (i) a ready‐to‐run Windows application and (ii) Python scripts. Please note that this software requires a computer with an NVIDIA GPU. Developed in collaboration with the Polleux Lab (Zuckerman Institute, Columbia University).



✨ Key features

  • All‐in‐one workflow – restoration → segmentation → quantification → validation in a single interface.
  • True 3D analysis – every stage uses volumetric data.
  • In‐vivo spine analysis – robust to low SNR in two‐photon datasets and challenging samples.
  • Model training from the GUI – train/finetune nnU‐Net, CARE‐3D or SelfNet without code.
  • Comprehensive, automatic results – automatic generation of validation MIPs, 3D volumes, and comprehensive spatial/morphological statistics.
  • Built‐in validation – compare ground-truth datasets to RESPAN outputs to validate quantification.
  • Step‐by‐step tutorials – view our introduction and tutorials for analysis and model training here.
  • Stand‐alone or scriptable – run the GUI on Windows or from a Python environment.
  • Lossless compression – gzip lossless compression of data ensures a minimal footprint for generated results and validation images.
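The lossless-compression point can be illustrated with a minimal sketch: plain Python gzip applied to a synthetic, mostly-background label volume (this is illustrative only, not RESPAN's actual I/O code).

```python
import gzip

# Synthetic stand-in for a segmentation label volume: mostly background
# zeros with sparse non-zero "spine" labels, flattened to raw bytes.
depth, height, width = 16, 64, 64
volume = bytearray(depth * height * width)
for i in range(0, len(volume), 997):  # sprinkle sparse labels
    volume[i] = 1

compressed = gzip.compress(bytes(volume), compresslevel=6)

# Lossless: decompression restores the exact voxel data.
assert gzip.decompress(compressed) == bytes(volume)
print(f"{len(volume)} -> {len(compressed)} bytes")
```

Because label images are dominated by background voxels, gzip-style compression typically shrinks them by an order of magnitude or more, which is why the generated results and validation images keep a small footprint.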

💻 System requirements

|         | Minimum            | Recommended             |
|---------|--------------------|-------------------------|
| OS      | Windows 10/11 x64  | Windows 10/11 x64       |
| GPU     | NVIDIA ≥ 8 GB VRAM | NVIDIA RTX 4090 (24 GB) |
| RAM     | 32 GB              | 128–256 GB              |
| Storage | HDD                | SSD                     |

*RESPAN should work on NVIDIA GPUs with less than 8 GB of VRAM, but this has not been tested.
*RESPAN implements data chunking and tiling, but for some steps larger images currently require more RAM.
*Please refer to the benchmark table at the end of this document for further performance-testing information.


🚀 Quick start (Windows GUI)

If you need help getting started, please refer to our video tutorial. Chapters linked below:
- Introduction to RESPAN and Image Segmentation
- Installing RESPAN
- Navigating the RESPAN GUI
- Example use of RESPAN
- Understanding RESPAN Outputs
- Training CARE Models in RESPAN
- Training SelfNet Models in RESPAN
- Using Restoration Models during RESPAN Analysis
- Training an nnU-Net Model using RESPAN

  1. Download
    • Latest RESPAN release (RESPAN v1.0 - 9/16/2025) → Windows Application (if required, previous versions of RESPAN can be found in our archive here)
    • RESPAN Analysis Settings file → here
    • Pre‐trained models → see Segmentation Models table below
    • For testing, we also provide example spinning disk confocal datasets with example results
  2. Install
    ▸ Unzip RESPAN.zip with 7-Zip
    ▸ Double‐click RESPAN.exe (the first run may take 1–2 min to initialize)
  3. Prepare your data
    MyExperiment/
    ├── Animal_A/
    │   ├── dendrite0.tif
    │   ├── dendrite1.tif
    │   └── Analysis_Settings.yml (example file provided in the download link above)
    └── Animal_B/
        ├── dendrite0.tif
        ├── ...
        └── Analysis_Settings.yml
    
    *Copy Analysis_Settings.yml into every sub‐folder. It stores resolution and advanced settings and enables batch processing; the default settings suit most experiments, and editing is only required for advanced functionality and image restoration.
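Copying the settings file into every sub-folder can be scripted. A minimal sketch, using the example layout above (`distribute_settings` is a hypothetical helper, not part of RESPAN, and the settings content shown is a placeholder):

```python
import shutil
import tempfile
from pathlib import Path

def distribute_settings(parent: Path, settings: Path) -> list:
    """Copy the settings file into every immediate sub-folder of parent."""
    targets = []
    for sub in sorted(p for p in parent.iterdir() if p.is_dir()):
        targets.append(Path(shutil.copy(settings, sub / settings.name)))
    return targets

# Demo on a throwaway layout mirroring the example above.
root = Path(tempfile.mkdtemp())
settings = root / "Analysis_Settings.yml"
settings.write_text("xy_resolution_nm: 65\n")  # placeholder content
for animal in ("Animal_A", "Animal_B"):
    (root / animal).mkdir()
copied = distribute_settings(root, settings)
print([p.parent.name for p in copied])  # -> ['Animal_A', 'Animal_B']
```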
  4. Run
    • Select the parent folder (e.g. "MyExperiment") in the GUI
    • Update analysis settings
    • Click Run – a 100 MB stack processes in ≈3 min on an RTX 4090
  5. Inspect outputs
    | Folder | Contents |
    |---|---|
    | Tables/ | Per‐image CSVs (Detected_spines_*.csv) + experiment summary |
    | Validation_Data/ | MIPs & volumes for QA (input, labels, skeleton, distance) |
    | SWC_files/ | Neuron/dendrite traces from Vaa3D |
    | Spine_Arrays/ | Cropped 2D maximum-intensity projections and 3D stacks centered on every spine |
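The per-image tables lend themselves to downstream aggregation. A minimal sketch with Python's standard csv module (the file names follow the pattern above, but the column names here are hypothetical, not RESPAN's actual CSV schema):

```python
import csv
import io

# Hypothetical contents of two Detected_spines_*.csv files; the real
# columns produced by RESPAN will differ.
tables = {
    "Detected_spines_dendrite0.csv": "spine_id,volume_um3\n1,0.12\n2,0.09\n",
    "Detected_spines_dendrite1.csv": "spine_id,volume_um3\n1,0.15\n",
}

# Build a per-image summary: spine count and mean spine volume.
summary = {}
for name, text in tables.items():
    rows = list(csv.DictReader(io.StringIO(text)))
    vols = [float(r["volume_um3"]) for r in rows]
    summary[name] = {"n_spines": len(rows),
                     "mean_volume_um3": sum(vols) / len(vols)}

for name, stats in sorted(summary.items()):
    print(name, stats)
```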

🖼️ Input data & considerations

  • File format – RESPAN currently accepts 2D/3D TIFF files.
  • Conversion macro – use the supplied Fiji + OMERO‐BioFormats macro to batch‐convert ND2/CZI/LIF, etc.
  • Model specificity – image‐restoration models (CARE & SelfNet) must match the modality and resolution being analyzed; mismatches can hallucinate or erase features. We strongly encourage retraining models specific to the microscope, objective, and resolution in use. RESPAN adapts input data to our pretrained segmentation models, and good results are likely without retraining, but we recommend using these first-pass results to fine-tune or train application-specific models.
  • Zarr support – Internally, RESPAN has added OME-Zarr generation to support larger datasets, with future updates intending to utilize these files with Dask.
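Since RESPAN currently accepts only TIFF input, a quick pre-flight check that every file in a folder really is a TIFF can save a failed batch run. A minimal sketch using the TIFF magic bytes (illustrative only, not RESPAN's loader):

```python
import tempfile
from pathlib import Path

TIFF_MAGICS = (b"II*\x00", b"MM\x00*")  # little- and big-endian TIFF headers

def looks_like_tiff(path: Path) -> bool:
    """Cheap header check; does not validate the full file structure."""
    with open(path, "rb") as fh:
        return fh.read(4) in TIFF_MAGICS

# Demo with two throwaway files.
d = Path(tempfile.mkdtemp())
(d / "ok.tif").write_bytes(b"II*\x00" + b"\x00" * 8)
(d / "bad.tif").write_bytes(b"not a tiff")
print(looks_like_tiff(d / "ok.tif"), looks_like_tiff(d / "bad.tif"))  # -> True False
```

Files that fail this check are good candidates for the Fiji + OMERO-BioFormats conversion macro mentioned above.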

🛠️ Advanced usage: training new models

| Task | GUI Tab | Typical time | Tutorial link |
|---|---|---|---|
| Segmentation (nnU‐Net) | nnU‐Net Training | 12–24 h | tutorial |
| Image restoration (CARE‐3D) | CARE Training | 3–5 h | tutorial |
| Axial resolution (SelfNet) | SelfNet Training | ≤2 h | tutorial |

Detailed protocols – including data organisation and annotation tips – are in the User Guide.


🎯 Pre‐trained segmentation models

| Segmentation Model | Download | Modality | Resolution | Annotations | Details |
|---|---|---|---|---|---|
| Model 1A | download | Spinning disk and Airyscan/laser scanning confocal microscopy | 65 × 65 × 150 nm | spines, dendrites, and soma | 224 datasets, including restored and raw data and additional augmentation |
| Model 1B | download | Spinning disk and Airyscan/laser scanning confocal microscopy | 65 × 65 × 150 nm | spine core & shell, dendrites, axons, and soma | 44 datasets, including restored and raw data and additional augmentation |
| Model 2 | download | Spinning disk confocal microscopy | 65 × 65 × 65 nm | spines, necks, dendrites, and soma | isotropic model, 7 datasets, no augmentation |
| Model 3 | download | Two-photon in vivo microscopy | 102 × 102 × 1000 nm | spines and dendrites | 908 datasets, additional augmentation |

For detailed protocols using RESPAN, please refer to our manuscript.


🔍 Validation workflow

This procedure guides you through validating RESPAN's segmentation outputs against a ground-truth dataset. If you have not yet generated a ground-truth annotation dataset, please refer to the notes below on creating annotations as a guide to producing them for your specific datasets before you proceed. CRITICAL: ground-truth annotations and the corresponding raw data volumes intended for validation must not be used to train the nnU-Net models they are intended to test.

  1. Open the Analysis Validation tab.
  2. Select the "analysis output directory" – the Validation_Data\Segmentation_labels folder created by RESPAN during analysis.
  3. Select the "ground truth data directory" – a folder containing ground-truth annotations for the data analyzed by RESPAN.
  4. Adjust detection thresholds if needed.
  5. Click Run.
  6. Metrics are saved to Analysis_Evaluation.csv.
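The kind of overlap metric used for such segmentation comparisons can be sketched as follows, with toy 3D masks as sets of voxel coordinates (RESPAN's actual metrics in Analysis_Evaluation.csv may be computed differently):

```python
def dice(pred: set, truth: set) -> float:
    """Dice similarity coefficient between two sets of labeled voxels."""
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

# Toy 3D masks as sets of (z, y, x) voxel coordinates:
# three voxels overlap, one voxel differs in each mask.
truth = {(0, 1, 1), (0, 1, 2), (0, 2, 1), (0, 2, 2)}
pred  = {(0, 1, 1), (0, 1, 2), (0, 2, 1), (0, 3, 3)}

print(round(dice(pred, truth), 3))  # -> 0.75
```

A Dice score of 1.0 means the predicted and ground-truth masks coincide exactly; values near 0 indicate little overlap.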

📚 Publications

If you use RESPAN as part of your research, please cite our work using the reference below:

Sergio B. Garcia, Alexa P. Schlotter, Daniela Pereira, Franck Polleux, Luke A. Hammond. (2024) RESPAN: An Automated Pipeline for Accurate Dendritic Spine Mapping with Integrated Image Restoration. bioRxiv. doi: https://doi.org/10.1101/2024.06.06.597812

RESPAN is already supporting peer-reviewed studies:

  • Baptiste Libé-Philippot, Ryohei Iwata, Aleksandra J. Recupero, Keimpe Wierda, Sergio Bernal Garcia, Luke Hammond, Anja van Benthem, Ridha Limame, Martyna Ditkowska, Sofie Beckers, Vaiva Gaspariunaite, Eugénie Peze-Heidsieck, Daan Remans, Cécile Charrier, Tom Theys, Franck Polleux, Pierre Vanderhaeghen (2024) Synaptic neoteny of human cortical neurons requires species-specific balancing of SRGAP2-SYNGAP1 cross-inhibition. Neuron. https://doi.org/10.1016/j.neuron.2024.08.021

🛠️ Advanced usage: creating environments for use in Python

Main development environment:

  1. mamba create -n respandev python=3.9 scikit-image pandas "numpy=1.23.4" nibabel pyinstaller ipython pyyaml pynvml numba dask dask-image ome-zarr zarr memory_profiler trimesh -c conda-forge -c nvidia -y
  2. conda activate respandev
  3. pip install "scipy==1.13.1" "tensorflow<2.11" csbdeep pyqt5 "cupy-cuda11x==13.2.0" "patchify==0.2.3"

Secondary environment:

  1. mamba create -n respaninternal python=3.9 pytorch torchvision pytorch-cuda=12.1 scikit-image opencv -c pytorch -c nvidia -y
  2. conda activate respaninternal
  3. git clone -b v2.3.1 https://github.com/MIC-DKFZ/nnUNet.git
  4. cd nnUNet
  5. pip install -e .

🛠️ Future Developments

  • Our latest model uses 3D spine cores and membranes to further improve accuracy in dense environments
  • Integration of Dask to remove resource limitations on processing large datasets
  • Improved efficiency in batch GPU mesh measurements, neck generation, and geodesic distance measurements

Benchmark: Processing and Training Times by System Configuration

| System | CPU | RAM (GB) | GPU | Storage | CARE training (10 epochs, min) | SelfNet training (×10 MB, 40 epochs, min) |
|---|---|---|---|---|---|---|
| Mid-performance | i9-11900K (8-core, 3.5 GHz) | 64 | RTX 3070 (8 GB) | Patriot M.2 P300 1 TB | 11.7 | 5 |
| High-performance | Threadripper PRO (16-core, 4.0 GHz) | 256 | RTX 4090 (24 GB) | Samsung M.2 SSD 1.92 TB | 3.5 | 1.5 |

Processing times (min) by input size:

| System | nnU-Net 10 MB | 100 MB | 500 MB | 1 GB | 2.5 GB | RESPAN (GPU) 10 MB | 100 MB | 500 MB | 1 GB | 2.5 GB |
|---|---|---|---|---|---|---|---|---|---|---|
| Mid-performance | 0.14 | 1.39 | 6.35 | 16 | 32.43 | 0.44 | 1.62 | 6.28 | 7.76 | 18.23 |
| High-performance | 0.14 | 1.39 | 6.35 | 14 | 32.43 | 0.26 | 2.33 | 8.91 | 14.07 | 26.62 |
