- All‐in‐one workflow – restoration → segmentation → quantification → validation in a single interface.
- True 3D analysis – every stage uses volumetric data.
- In‐vivo spine analysis – robust to low SNR in two‐photon datasets and challenging samples.
- Model training from the GUI – train or fine‐tune nnU‐Net, CARE‐3D, or SelfNet models without writing code.
- Comprehensive, automatic results – validation MIPs, 3D volumes, and detailed spatial/morphological statistics are generated for every run.
- Built‐in validation – compare ground truth datasets to RESPAN outputs to validate quantification.
- Step‐by‐step tutorials – view our introduction and tutorials for analysis and model training here
- Stand‐alone or scriptable – run the GUI on Windows or from a Python environment.
- Lossless compression – gzip compression keeps the footprint of generated results and validation images minimal.
| | Minimum | Recommended |
|---|---|---|
| OS | Windows 10/11 x64 | Windows 10/11 x64 |
| GPU | NVIDIA ≥ 8 GB VRAM | NVIDIA RTX 4090 (24 GB) |
| RAM | 32 GB | 128–256 GB |
| Storage | HDD | SSD |
*RESPAN should work on NVIDIA GPUs with less than 8 GB of VRAM, but this has not been tested.
*RESPAN implements data chunking and tiling, but some steps currently require more RAM for larger images.
*Please refer to the table at the end of this document for further performance testing information.
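To check your GPU against these requirements before installing, here is a minimal sketch using pynvml (which ships in the RESPAN development environment). This is a convenience suggestion, not part of RESPAN:

```python
import pynvml

# Query the first NVIDIA GPU and report its total VRAM.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):  # older pynvml versions return bytes
    name = name.decode()
vram_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).total / 1024**3
print(f"{name}: {vram_gb:.1f} GB VRAM")
if vram_gb < 8:
    print("Below the recommended 8 GB minimum; RESPAN is untested here.")
pynvml.nvmlShutdown()
```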
If you need help getting started, please refer to our video tutorial. Chapters linked below:
- Introduction to RESPAN and Image Segmentation
- Installing RESPAN
- Navigating the RESPAN GUI
- Example use of RESPAN
- Understanding RESPAN Outputs
- Training CARE Models in RESPAN
- Training SelfNet Models in RESPAN
- Using Restoration Models during RESPAN Analysis
- Training an nnU-Net Model using RESPAN
- Download
• Latest RESPAN release (RESPAN v1.0 - 9/16/2025) → Windows Application (if required, previous versions of RESPAN can be found in our archive here)
• RESPAN Analysis Settings file → here
• Pre‐trained models → see Segmentation Models table below
• For testing, we also provide example spinning disk confocal datasets with example results
- Install
▸ Unzip RESPAN.zip with 7zip
▸ Double‐click RESPAN.exe (the first run may take 1–2 min to initialize)
- Prepare your data
*Copy Analysis_Settings.yml into every sub‐folder (it stores resolution and advanced settings and enables batch processing; default settings suit most experiments, and editing is only required for advanced functionality and image restoration). A helper sketch for copying the settings file appears after these steps.

    MyExperiment/
    ├── Animal_A/
    │   ├── dendrite0.tif
    │   ├── dendrite1.tif
    │   └── Analysis_Settings.yml   (example file provided in the download link above)
    └── Animal_B/
        ├── dendrite0.tif
        ├── ...
        └── Analysis_Settings.yml

- Run
• Select the parent folder (e.g. "MyExperiment") in the GUI
• Update analysis settings
• Click Run – a 100 MB stack processes in ≈3 min on an RTX 4090
- Inspect outputs
| Folder | Contents |
|---|---|
| Tables/ | Per‐image CSVs (Detected_spines_*.csv) + experiment summary |
| Validation_Data/ | MIPs & volumes for QA (input, labels, skeleton, distance) |
| SWC_files/ | Neuron/dendrite traces from Vaa3D |
| Spine_Arrays/ | Cropped 2D maximum intensity projections and 3D stacks centered around every spine |
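For large experiments, copying the settings file into each sub‐folder by hand is tedious. A minimal Python sketch, assuming the folder layout shown above (the paths are placeholders; this is not part of RESPAN itself):

```python
import shutil
from pathlib import Path

# Copy Analysis_Settings.yml into every sub-folder of an experiment.
# "MyExperiment" and the settings path are placeholders for your own data.
settings = Path("Analysis_Settings.yml")
experiment = Path("MyExperiment")

for subfolder in (p for p in experiment.iterdir() if p.is_dir()):
    shutil.copy(settings, subfolder / settings.name)
    print(f"Copied settings to {subfolder}")
```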
- File format – RESPAN currently accepts 2D/3D TIFF files.
- Conversion macro – use the supplied Fiji + OMERO‐BioFormats macro to batch‐convert ND2/CZI/LIF, etc.
- Model specificity – image‐restoration models (CARE & SelfNet) must match the modality and resolution of the data being analyzed; mismatched models can hallucinate or erase features. We strongly encourage training models specific to the microscope, objective, and resolution in use. RESPAN adapts input data to our pretrained segmentation models, so good results are likely without retraining, but we recommend using these first-pass results to fine-tune or train application-specific models.
- Zarr support – RESPAN now generates OME-Zarr internally to support larger datasets; future updates will process these files with Dask.
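For reference, a minimal sketch of writing a TIFF volume to OME-Zarr with the ome-zarr package (included in the RESPAN development environment). File names are placeholders, and RESPAN's internal conversion may differ:

```python
import zarr
from ome_zarr.io import parse_url
from ome_zarr.writer import write_image
from skimage import io

# Load a 3D TIFF volume (z, y, x); the file name is a placeholder.
volume = io.imread("dendrite0.tif")

# Create an OME-Zarr store and write the volume with multiscale levels.
store = parse_url("dendrite0.zarr", mode="w").store
root = zarr.group(store=store)
write_image(image=volume, group=root, axes="zyx")
```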
| Task | GUI Tab | Typical time | Tutorial link |
|---|---|---|---|
| Segmentation (nnU‐Net) | nnU‐Net Training | 12–24 h | tutorial |
| Image restoration (CARE‐3D) | CARE Training | 3–5 h | tutorial |
| Axial resolution (SelfNet) | SelfNet Training | ≤2 h | tutorial |
Detailed protocols – including data organization and annotation tips – are in the User Guide.
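If you ever need to organize nnU-Net training data outside the GUI, here is a rough sketch of the raw-data layout nnU-Net v2 expects. The dataset name, label names, and case count are placeholders; RESPAN's nnU-Net Training tab prepares this structure for you:

```python
import json
from pathlib import Path

# Placeholder dataset: nnU-Net v2 expects nnUNet_raw/DatasetXXX_Name/
# with imagesTr, labelsTr, and a dataset.json describing the data.
root = Path("nnUNet_raw/Dataset001_Spines")
(root / "imagesTr").mkdir(parents=True, exist_ok=True)  # images, e.g. case_000_0000.tif
(root / "labelsTr").mkdir(parents=True, exist_ok=True)  # annotations, e.g. case_000.tif

dataset = {
    "channel_names": {"0": "fluorescence"},  # one imaging channel
    "labels": {"background": 0, "dendrite": 1, "spine": 2},  # example classes
    "numTraining": 44,  # number of training cases
    "file_ending": ".tif",
}
(root / "dataset.json").write_text(json.dumps(dataset, indent=2))
```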
| Segmentation Model | Download | Modality | Resolution | Annotations | Details |
|---|---|---|---|---|---|
| Model 1A | download | Spinning disk and Airyscan/laser scanning confocal microscopy | 65 x 65 x 150nm | spines, dendrites, and soma | 224 datasets, including restored and raw data and additional augmentation |
| Model 1B | download | Spinning disk and Airyscan/laser scanning confocal microscopy | 65 x 65 x 150nm | spines core & shell, dendrites, axons, and soma | 44 datasets, including restored and raw data and additional augmentation |
| Model 2 | download | Spinning disk confocal microscopy | 65 x 65 x 65nm | spines, necks, dendrites, and soma | isotropic model, 7 datasets, no augmentation |
| Model 3 | download | Two-photon in vivo confocal microscopy | 102 x 102 x 1000nm | spines and dendrites | 908 datasets, additional augmentation |
For detailed protocols using RESPAN, please refer to our manuscript.
This procedure guides you through validating RESPAN's segmentation outputs against a ground truth dataset. If you have not yet generated a ground truth annotation dataset, refer to the notes below on creating annotations for your specific datasets before you proceed. CRITICAL: Ground truth annotations and the corresponding raw data volumes used for validation must not be used to train the nnU-Net models they are intended to test.
- Open the Analysis Validation tab.
- Select the "analysis output directory" – this is the `Validation_Data\Segmentation_labels` folder created by RESPAN during analysis
- Select the "ground truth data directory" – this is a folder containing ground truth annotations for the data analyzed by RESPAN
- Adjust detection thresholds if needed
- Click Run.
- Metrics are saved to `Analysis_Evaluation.csv`.
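To illustrate what this comparison measures, here is a minimal sketch of a voxel-wise Dice overlap between one predicted label volume and its ground truth. The file paths are placeholders, and RESPAN's evaluation reports its own set of metrics:

```python
import numpy as np
from skimage import io

# Compare one predicted label volume against its ground truth; the paths
# are placeholders for a matched pair of annotation volumes.
pred = io.imread("Validation_Data/Segmentation_labels/dendrite0.tif") > 0
truth = io.imread("ground_truth/dendrite0.tif") > 0

# Dice coefficient: 2|A∩B| / (|A| + |B|); 1.0 means perfect voxel overlap.
intersection = np.logical_and(pred, truth).sum()
dice = 2 * intersection / (pred.sum() + truth.sum())
print(f"Dice coefficient: {dice:.3f}")
```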
If RESPAN assisted your research, please cite our work using the reference below:
Sergio B. Garcia, Alexa P. Schlotter, Daniela Pereira, Franck Polleux, Luke A. Hammond. (2024) RESPAN: An Automated Pipeline for Accurate Dendritic Spine Mapping with Integrated Image Restoration. bioRxiv. doi: https://doi.org/10.1101/2024.06.06.597812

RESPAN is already supporting peer-reviewed studies:
- Baptiste Libé-Philippot, Ryohei Iwata, Aleksandra J. Recupero, Keimpe Wierda, Sergio Bernal Garcia, Luke Hammond, Anja van Benthem, Ridha Limame, Martyna Ditkowska, Sofie Beckers, Vaiva Gaspariunaite, Eugénie Peze-Heidsieck, Daan Remans, Cécile Charrier, Tom Theys, Franck Polleux, Pierre Vanderhaeghen (2024) Synaptic neoteny of human cortical neurons requires species-specific balancing of SRGAP2-SYNGAP1 cross-inhibition. Neuron. https://doi.org/10.1016/j.neuron.2024年08月02日1.
Main development environment:
- mamba create -n respandev python=3.9 scikit-image pandas "numpy=1.23.4" nibabel pyinstaller ipython pyyaml pynvml numba dask dask-image ome-zarr zarr memory_profiler trimesh -c conda-forge -c nvidia -y
- conda activate respandev
- pip install "scipy==1.13.1" "tensorflow<2.11" csbdeep pyqt5 "cupy-cuda11x==13.2.0" "patchify==0.2.3"
Secondary environment:
- mamba create -n respaninternal python=3.9 pytorch torchvision pytorch-cuda=12.1 scikit-image opencv -c pytorch -c nvidia -y
- git clone -b v2.3.1 https://github.com/MIC-DKFZ/nnUNet.git
- pip install -e ./nnUNet (run from the directory containing the cloned repository)
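As an optional sanity check (our suggestion, not an official RESPAN script), you can confirm that the respandev environment sees the GPU:

```python
# Run inside the respandev environment.
import tensorflow as tf
import cupy

print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
print("CuPy CUDA available:", cupy.cuda.is_available())
```

In the respaninternal environment, `python -c "import torch; print(torch.cuda.is_available())"` serves the same purpose.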
- Our latest model uses 3D spine cores and membranes to further improve accuracy in dense environments
- Integration of Dask to remove resource limitations on processing large datasets (the intended access pattern is sketched after this list)
- Improved efficiency in batch GPU mesh measurements, neck generation, and geodesic distance measurements
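To show the idea behind the planned Dask integration, here is a minimal sketch of lazy, chunked access to an OME-Zarr volume. The store path and component are placeholders, and this is not RESPAN's implementation:

```python
import dask.array as da

# Open the full-resolution level ("0") of an OME-Zarr store lazily;
# no voxel data is read until a computation is requested.
arr = da.from_zarr("dendrite0.zarr", component="0")
print(arr.shape, arr.chunksize)

# A maximum-intensity projection computed chunk by chunk, so the whole
# volume never has to fit in RAM at once.
mip = arr.max(axis=0).compute()
```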
| System | CPU | RAM (GB) | GPU | Storage | CARE Training, 10 epochs (min) | SelfNet Training, ×10MB, 40 epochs (min) | nnU-Net, 10MB (min) | nnU-Net, 100MB (min) | nnU-Net, 500MB (min) | nnU-Net, 1GB (min) | nnU-Net, 2.5GB (min) | RESPAN GPU, 10MB (min) | RESPAN GPU, 100MB (min) | RESPAN GPU, 500MB (min) | RESPAN GPU, 1GB (min) | RESPAN GPU, 2.5GB (min) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Mid-performance | i9-11900K (8-core, 3.5 GHz) | 64 | RTX 3070 (8GB) | Patriot M.2 P300 1TB | 11.7 | 5 | 0.14 | 1.39 | 6.35 | 16 | 32.43 | 0.44 | 1.62 | 6.28 | 7.76 | 18.23 |
| High-performance | Threadripper PRO (16-core, 4.0 GHz) | 256 | RTX 4090 (24GB) | Samsung M.2 SSD 1.92TB | 3.5 | 1.5 | 0.14 | 1.39 | 6.35 | 14 | 32.43 | 0.26 | 2.33 | 8.91 | 14.07 | 26.62 |