Rodent Tools
A Toolkit for Analysis and Visualization of Preclinical Rodent Neuroimaging Experiments
Overview
This project will develop an open source suite of software for the processing, analysis, and visualization of rodent brain imaging data, which will enhance the ability of neuroscience researchers to perform preclinical studies on rodent models. This software will provide advanced capabilities for analyzing multimodal imaging data, including images from magnetic resonance imaging (MRI) and optical microscopy studies, with the goal of improving understanding of the processes and mechanisms underlying neurological diseases and disorders and their potential treatment. The software will include interactive processing tools to facilitate ease of use, as well as command-line tools to enable large-scale processing for larger studies.
Team
Principal Investigators: David Shattuck and Allan MacKenzie-Graham.
Co-Investigators: Anand Joshi, Daniel Tward, and Shantanu Joshi.
Staff and Students: Yeun Kim.
Project Summary
Rodent models remain an important tool in preclinical studies of brain diseases and disorders, as well as in basic neuroscience investigations. Rodent imaging data, acquired through techniques including MRI and microscopy, play a critical role in many of these studies. While a great deal of effort has gone into the development of software tools for analyzing MRI of the brain, much of this work has focused on human data, and investigators studying animal models of disease must often resort to adapting these tools for their research. In this project, we will address this need by developing a dedicated suite of open source software tools for processing, analyzing, and visualizing neuroimaging data acquired from rodent brains. These tools will operate on structural, diffusion, and functional MRI, as well as optical microscopy of optically cleared, serially sectioned tissue samples. These tools will build upon our decades of experience developing software for analyzing human and mouse imaging data, our experience in developing multimodal atlases of the mouse brain, and our active efforts in community engagement and dissemination while applying these resources in neuroscientific studies. Where suitable, we will make use of deep learning methods to produce powerful segmentation and registration networks trained on manually annotated and delineated data. We will also develop easy-to-use interfaces that will facilitate data processing and provide advanced visualization capabilities for datasets with sizes on the order of one terapixel.
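As an illustration of the deep learning component described above, the following sketch shows the general shape of a small convolutional segmentation network that could be trained on manually delineated rodent MRI to predict brain masks (i.e., brain extraction from whole-head scans). It is not the project's actual code: the architecture, channel sizes, and the random training slice and mask are illustrative placeholders, and PyTorch is used only as one possible framework.

    # Minimal sketch of a U-Net-style segmentation network for brain-mask
    # prediction from a 2D MRI slice. Everything here is illustrative.
    import torch
    import torch.nn as nn

    class TinyUNet(nn.Module):
        def __init__(self, in_ch=1, out_ch=1, base=16):
            super().__init__()
            def block(ci, co):
                return nn.Sequential(
                    nn.Conv2d(ci, co, 3, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(co, co, 3, padding=1), nn.ReLU(inplace=True))
            self.enc1 = block(in_ch, base)          # full-resolution encoder
            self.enc2 = block(base, base * 2)       # half-resolution encoder
            self.pool = nn.MaxPool2d(2)
            self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
            self.dec1 = block(base * 2, base)       # decoder with skip connection
            self.head = nn.Conv2d(base, out_ch, 1)  # per-pixel mask logits

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
            return self.head(d1)

    # One training step on placeholder data standing in for an annotated slice.
    model = TinyUNet()
    loss_fn = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    mri = torch.randn(1, 1, 128, 128)                   # stand-in MRI slice
    mask = (torch.rand(1, 1, 128, 128) > 0.5).float()   # stand-in brain mask
    opt.zero_grad()
    loss = loss_fn(model(mri), mask)
    loss.backward()
    opt.step()

In practice, such a network would be trained on many annotated slices or 3D volumes, and the predicted mask would feed the downstream tissue classification and registration steps.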
The project has five specific aims. Aim 1 will develop MRI processing tools, which will include intrasubject co-registration of MRI modalities, extraction of brain tissue from whole-head scans, tissue classification, and processing of diffusion and functional MRI data. Aim 2 will develop tools for processing microscopy of cleared and sectioned tissue, with the major goal of aligning these data to a reference atlas generated from either optical microscopy or MRI. These tools will perform cell counting in automatically segmented regions, axon following, analysis of dendritic arborization, and dendritic spine counting. In Aim 3, we will develop a statistical analysis toolbox, which will perform statistical inference on neuroimaging measures from microscopy and MRI data analyzed using methods from Aims 1 and 2. In Aim 4, we will integrate the components from Aims 1-3 into an informatics platform that will provide command-line tools for easy scripting, interoperability with related imaging tools, and a graphical interface for visualizing data across different scales. In Aim 5, we will evaluate our software tools using two studies: imaging an experimental autoimmune encephalomyelitis (EAE) mouse model of multiple sclerosis, and imaging a mouse model of normative aging. We will also work with a network of small animal imaging experts external to the project, who will use and evaluate the software. These driving projects will serve as testbeds to ensure the practical utility of the software in a research setting, providing direction for the development of our research platform. We will distribute the software freely under an open source license and provide user support through our website.
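To make the statistical analysis component of Aim 3 more concrete, below is a minimal sketch of the kind of region-wise group comparison such a toolbox would need to support, assuming regional measures (here, volumes) have already been extracted by the Aim 1 and Aim 2 pipelines. The group labels, region names, and values are placeholders invented for illustration; the sketch uses SciPy for Welch's t-tests and statsmodels for false discovery rate correction.

    # Minimal sketch (illustrative only): region-wise comparison of regional
    # volumes between two groups (e.g., EAE vs. control mice) with FDR
    # correction. All region names, sample sizes, and values are placeholders.
    import numpy as np
    import pandas as pd
    from scipy import stats
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    regions = ["hippocampus", "corpus_callosum", "cerebellum", "striatum"]

    # Placeholder tables: one row per animal, one column per region (mm^3).
    controls = pd.DataFrame(rng.normal(10.0, 1.0, size=(12, len(regions))), columns=regions)
    eae = pd.DataFrame(rng.normal(9.0, 1.0, size=(12, len(regions))), columns=regions)

    # Welch's t-test per region, then Benjamini-Hochberg FDR across regions.
    rows = []
    for region in regions:
        t, p = stats.ttest_ind(controls[region], eae[region], equal_var=False)
        rows.append({"region": region, "t": t, "p_uncorrected": p})
    results = pd.DataFrame(rows)
    reject, p_fdr, _, _ = multipletests(results["p_uncorrected"], alpha=0.05, method="fdr_bh")
    results["p_fdr"] = p_fdr
    results["significant"] = reject
    print(results.to_string(index=False))

A full toolbox would extend this pattern to other measures (for example, diffusion metrics or cell counts) and to richer models, but the workflow of per-region inference followed by multiple-comparison correction is representative.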
Acknowledgements
This project is supported in part by NIH Grant R01 NS121761.
Licenses
Most of the code in this project is licensed under the GNU General Public License, version 2 (GPLv2) only.