Kristin Dana
Professor
Rutgers University
ECE Department (Electrical and Computer Engineering)
Rutgers Computer Science Dept: Member of Graduate Faculty
Director, ECE Vision and Robotics Lab; SOCRATES
848-445-5253
kristin.dana at rutgers dot edu
Welcome
The ECE Vision and Robotics Laboratory conducts innovative research at the intersection of computer vision and robotics, key branches of AI. The lab develops new methods for computer vision, human-robot interaction, precision agriculture, remote sensing, and computational photography.
Select Research Projects
AI and Vision in Agriculture
Agtech Framework for Cranberry-Ripening Analysis Using Vision Foundation Models
Johnson, Faith, Ryan Meegan, Jack Lowry, Peter Oudemans, and Kristin Dana. 2025. "Agtech Framework for Cranberry-Ripening Analysis Using Vision Foundation Models." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 1207–1216.
This work presents a computer vision framework for analyzing the ripening process of cranberry crops using both aerial drone and ground-based imaging across a full growing season. By leveraging vision transformers (ViT) and UMAP dimensionality reduction, the framework enables interpretable visualizations of berry appearance and quantifies ripening trajectories. The approach supports precision agriculture tasks such as high-throughput phenotyping and crop variety comparison. This is the first visual framework for cranberry ripening assessment, with potential impact across other crops like wine grapes and olives.
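A minimal sketch of the embedding-and-projection idea described above, not the paper's implementation: it extracts features for berry image crops with an off-the-shelf pretrained ViT (via timm) and projects them to 2D with UMAP so per-date centroids trace an appearance trajectory over the season. The directory layout ("crops/<date>/") is hypothetical.

```python
# Sketch: ViT embeddings of berry crops + UMAP projection of a ripening trajectory.
from pathlib import Path

import numpy as np
import timm                    # pip install timm
import torch
import umap                    # pip install umap-learn
from PIL import Image

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
model.eval()
cfg = timm.data.resolve_data_config({}, model=model)
transform = timm.data.create_transform(**cfg)

def embed_crops(crop_dir: Path) -> np.ndarray:
    """Return one ViT embedding per berry crop image in crop_dir."""
    feats = []
    with torch.no_grad():
        for img_path in sorted(crop_dir.glob("*.jpg")):
            x = transform(Image.open(img_path).convert("RGB")).unsqueeze(0)
            feats.append(model(x).squeeze(0).numpy())
    return np.stack(feats)

# One folder of berry crops per imaging date (hypothetical layout: crops/<date>/*.jpg).
dates = sorted(Path("crops").iterdir())
embeddings = [embed_crops(d) for d in dates]

# Fit UMAP on all embeddings, then use per-date centroids as a ripening trajectory.
all_feats = np.concatenate(embeddings)
proj = umap.UMAP(n_components=2, random_state=0).fit_transform(all_feats)
splits = np.cumsum([len(e) for e in embeddings])[:-1]
centroids = [p.mean(axis=0) for p in np.split(proj, splits)]
```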
[BibTeX] | [Project Page]
Vision on the bog: Cranberry crop risk evaluation with deep learning
Akiva, Peri, Benjamin Planche, Aditi Roy, Peter Oudemans, and Kristin Dana. "Vision on the bog: Cranberry crop risk evaluation with deep learning." Computers and Electronics in Agriculture 203 (2022)
Vision-on-the-bog is a framework for smart agriculture that enables real-time decision-making by monitoring cranberry crops. It performs instance segmentation to count sun-exposed cranberries at risk of overheating and predicts internal berry temperature using drone and sky imaging. A weakly supervised segmentation method reduces annotation effort, while a differentiable model jointly estimates solar irradiance and berry temperature. These tools support short-term risk assessment to inform irrigation decisions. The approach is validated over two growing seasons and can be extended to crops such as grapes, olives, and grain.
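A minimal sketch of the counting-and-risk idea, using an off-the-shelf Mask R-CNN as a stand-in for the paper's weakly supervised segmentation network: count detected instances in a drone image tile and raise an alert when the count of exposed berries crosses a risk threshold. The image path, score cutoff, and threshold are all hypothetical.

```python
# Sketch: count berry-like instances in a drone tile and flag an irrigation alert.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    MaskRCNN_ResNet50_FPN_Weights,
    maskrcnn_resnet50_fpn,
)

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("bog_tile.jpg")             # hypothetical drone image tile
with torch.no_grad():
    pred = model([preprocess(img)])[0]

exposed = int((pred["scores"] > 0.5).sum())  # instances kept above a score cutoff
RISK_THRESHOLD = 200                         # hypothetical per-tile threshold
if exposed > RISK_THRESHOLD:
    print(f"{exposed} exposed berries detected: consider cooling irrigation")
```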
[BibTeX] | [Project Page]
Visual Navigation
A Landmark-Aware Visual Navigation Dataset
This work introduces the Landmark-Aware Visual Navigation (LAVN) dataset to support supervised learning of human-centric exploration and map-building policies. The dataset includes RGB-D observations, human point-clicks for navigation waypoints, and annotated visual landmarks from both virtual and real-world environments. These annotations enable direct supervision for learning efficient exploration strategies and landmark-based mapping. LAVN spans diverse scenes and is publicly released with comprehensive documentation to facilitate research in visual navigation.
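A minimal sketch of how a trajectory with the annotation types listed above might be consumed for supervised training. The JSON field names and directory layout are hypothetical; the released dataset documentation defines the actual schema.

```python
# Sketch: iterate over one LAVN-style trajectory, pairing RGB-D frames with
# human waypoint clicks and landmark annotations (hypothetical schema).
import json
from pathlib import Path

import numpy as np
from PIL import Image

traj_dir = Path("lavn/trajectory_0000")      # hypothetical layout
ann = json.loads((traj_dir / "annotations.json").read_text())

for frame in ann["frames"]:
    rgb = np.asarray(Image.open(traj_dir / frame["rgb"]))   # H x W x 3 color image
    depth = np.load(traj_dir / frame["depth"])              # H x W depth map
    click = frame.get("waypoint_click")      # (u, v) pixel chosen by the human
    landmarks = frame.get("landmarks", [])   # annotated landmark regions / ids
    # ... feed (rgb, depth, click, landmarks) to a supervised exploration policy
```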
[BibTeX] | [Project Page]
Feudal Networks for Visual Navigation
Johnson, Faith, Bryan Bo Cao, Ashwin Ashok, Shubham Jain, and Kristin Dana. "Feudal Networks for Visual Navigation." Presented at the Embodied AI Workshop (EAI), CVPR 2024; arXiv preprint arXiv:2402.12498 (2024).
This work proposes a novel feudal learning approach to visual navigation that eliminates the need for reinforcement learning, metric maps, graphs, or odometry. The hierarchical architecture includes a high-level manager with a self-supervised memory proxy map and a mid-level manager with a waypoint network trained to mimic human navigation behaviors. Each level of the agent hierarchy operates at different spatial and temporal scales, enabling efficient, human-like exploration. The system is trained using a small set of teleoperation videos and achieves near state-of-the-art performance on image-goal navigation tasks in previously unseen environments.
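A minimal sketch of the hierarchy described above, not the authors' architecture: a high-level manager scores coarse directions from a memory proxy map embedding, a mid-level waypoint network (trained to imitate human teleoperation) proposes the next waypoint, and the result is handed to a low-level controller. Module names and feature sizes are illustrative.

```python
# Sketch: feudal-style navigation hierarchy (high-level manager + waypoint network).
import torch
import torch.nn as nn

class HighLevelManager(nn.Module):
    """Scores coarse directions from an aggregated memory proxy map embedding."""
    def __init__(self, feat_dim=512, n_directions=8):
        super().__init__()
        self.head = nn.Linear(feat_dim, n_directions)

    def forward(self, memory_embedding):          # (B, feat_dim)
        return self.head(memory_embedding)        # logits over coarse directions

class WaypointNetwork(nn.Module):
    """Mid-level policy trained to mimic human-chosen waypoints."""
    def __init__(self, feat_dim=512, n_directions=8):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim + n_directions, 256), nn.ReLU(), nn.Linear(256, 2)
        )

    def forward(self, obs_embedding, direction_logits):
        # Predict a (u, v) image waypoint conditioned on the manager's direction.
        return self.head(torch.cat([obs_embedding, direction_logits], dim=-1))

# One step through the hierarchy with dummy features.
manager, waypointer = HighLevelManager(), WaypointNetwork()
memory, obs = torch.randn(1, 512), torch.randn(1, 512)
waypoint = waypointer(obs, manager(memory))       # handed to a low-level controller
```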
[BibTeX] | [Project Page]