The Bassett Collection is a series of high-resolution images of human anatomy captured on paired slides, allowing them to be viewed in stereoscopic 3D. The collection is the result of a seventeen-year collaboration between Dr. David Lee Bassett, an expert anatomist, and William Gruber, inventor of a stereoscopic imagery system; it was completed in 1962.
One of the most exciting applications of the Bassett slides is the possibility of viewing them in virtual reality with Google Cardboard. Existing research has demonstrated that viewing the images in 3D allows users to develop a grasp of anatomy superior to viewing the same images in 2D. This can be particularly beneficial for trainees beginning their studies in the healthcare field, especially those with minimal prior exposure to anatomy.
McMaster University in Hamilton, Canada has been at the forefront of bringing the Bassett Collection into virtual reality. Its app, “VRBR (Virtual Reality Bellringer),” used Google Cardboard headsets to view the Bassett slides and was tested with undergraduate anatomy students and medical students. VRBR improved on the collection by adding labels and annotations to the images.
Virtual Vesalius will build on this work by using medical imaging that can be viewed stereoscopically with volume rendering, allowing the anatomy to be viewed from any angle and helping students develop a better understanding of the anatomical relationships between structures.
Some of our most complex work yet! This particular project is still in the development phase. Come back soon for more updates and a demo!
The problem
Research at the intersection of mixed reality (MR) and anatomy education has routinely demonstrated a role for new MR-based modalities in improving anatomy education, but most MR apps rely on custom illustrated projections of 3D models into user and screen space. These virtual assets are subject to device- and developer-dependent differences in display fidelity and specimen art quality, and the digital 3D models themselves are not trivial to create. Existing evidence also suggests that, in some use cases, virtual models produce inferior learning outcomes compared to existing physical models. Together, this evidence makes a case for instead improving the experience of using existing physical models.
Our proposal
We are building a mixed-reality (MR) educational smartphone app, built on a deep-learning computer vision framework, that improves the experience of using physical 3D models in classrooms by identifying and labeling anatomical features on those models.
Point the camera at an existing anatomical model, and let the phone label key features on the model for you.
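To make that pipeline concrete, here is a minimal Python sketch of the detect-and-label loop, assuming an object detector fine-tuned on annotated photos of anatomical models. The checkpoint name, the class list, and the fine-tuning itself are hypothetical placeholders, and torchvision's Faster R-CNN stands in for whatever architecture the final app uses.

```python
# Minimal sketch: detect and label features on a physical anatomical model.
# The checkpoint "anatomy_detector.pt" and CLASSES are hypothetical.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image, ImageDraw

CLASSES = ["background", "left ventricle", "aortic arch", "pulmonary artery"]  # hypothetical label set

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=len(CLASSES))
model.load_state_dict(torch.load("anatomy_detector.pt"))  # hypothetical fine-tuned weights
model.eval()

def label_frame(image: Image.Image, threshold: float = 0.7) -> Image.Image:
    """Run detection on one camera frame and draw labels over the model."""
    with torch.no_grad():
        pred = model([to_tensor(image)])[0]
    draw = ImageDraw.Draw(image)
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if score < threshold:
            continue  # skip low-confidence detections
        x0, y0, x1, y1 = box.tolist()
        draw.rectangle([x0, y0, x1, y1], outline="red", width=3)
        draw.text((x0, y0 - 12), CLASSES[int(label)], fill="red")
    return image
```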
Stereopsis refers to the ability to perceive depth, stemming specifically from the disparity and integration of visual information from two laterally placed eyes. Each eye views objects in the field of view from a slightly different angle, producing horizontal (binocular) disparities which, when integrated, yield an understanding of environmental depth that supplements other visual cues such as motion parallax, contrast, and overlap. In anatomy education, spatial awareness and an understanding of the distances and relationships between structures are paramount.
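As a worked illustration of this geometry, the standard triangulation relation Z = fB/d links horizontal disparity to depth, where f is the focal length in pixels, B the baseline between the two viewpoints, and d the disparity in pixels. The sketch below uses illustrative numbers, not measurements from our work.

```python
# Depth from binocular disparity via triangulation: Z = f * B / d.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (metres) of a point seen with the given horizontal disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive; zero disparity means infinite depth")
    return focal_px * baseline_m / disparity_px

# E.g. a 1400 px focal length, a 6.5 cm baseline (a typical inter-pupillary
# distance), and a 20 px disparity place the point 1400 * 0.065 / 20 = 4.55 m away.
print(depth_from_disparity(1400, 0.065, 20))
```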
Creating stereoscopic 3D anatomy learning experiences: an artistic and technical reference
The following document characterizes the methods and considerations, both artistic and technical, involved in creating a stereoscopic-3D anatomy learning experience; it is also generally suited to any type of close-subject stereoscopic media. All software used and referenced within is free and/or open source, with the exception of some proprietary color-correction programs (which are entirely optional).
An .stl model mount for a dual-smartphone stereo filming setup
In an effort to democratize not only the distribution of stereoscopic anatomy learning experiences but also their production, we have developed a 3D-printable, standard tripod-mountable stand that holds two smartphones (not necessarily of the same model) and positions them at an adjustable inter-camera distance to film a subject.
Space is included for wired phone setups, so that the phones can be controlled from an external computer connection. More details on such a setup are included in the attached whitepaper PDF.
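As a starting point for setting that inter-camera distance, a common rule of thumb from stereo photography (the "1/30 rule") suggests a baseline of roughly one thirtieth of the distance to the nearest subject. The helper below sketches this heuristic; it is a general stereography guideline, not a calibration specific to our rig.

```python
# Rule-of-thumb starting baseline for a stereo rig: the "1/30 rule".
# The rig's adjustable spacing lets you deviate from this as needed.

def suggested_baseline_cm(nearest_subject_cm: float, divisor: float = 30.0) -> float:
    """Suggested inter-camera distance for a given nearest-subject distance."""
    return nearest_subject_cm / divisor

# A specimen 90 cm from the cameras suggests a ~3 cm baseline.
print(suggested_baseline_cm(90))  # 3.0
```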
ViVRe is a Unity application designed for the creation and delivery of volume-rendered VR learning objects, sourced from 3D medical imaging data.
This repository allows a user to build an app that acts as a “volume renderer,” software that renders a CT scan or MRI in 3D, on Android phones. Once built, the user can put their phone in a Google Cardboard 3D viewer and “slice” and rotate the model as desired. The app is open source and can be used by any institution that wishes to inexpensively let students study three-dimensional radiographs and complex anatomical slices.
It’s a basic but powerful tool for understanding the inside of CT and MRI volumes in stereoscopic 3D.
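The app itself is built in Unity; purely as an illustration of what a volume renderer does, the NumPy sketch below projects a stand-in CT volume to a 2D image by maximum intensity projection, one of the simplest volume-rendering schemes, with a clipping offset standing in for the interactive “slice” control.

```python
# Illustrative volume rendering of a CT/MRI stack via maximum intensity
# projection (MIP). The volume here is a random stand-in array; a real
# app would load voxel data from DICOM or NIfTI files.
import numpy as np

# Stand-in for a loaded CT volume: depth x height x width.
volume = np.random.rand(128, 256, 256).astype(np.float32)

def render_mip(vol: np.ndarray, clip_depth: int = 0) -> np.ndarray:
    """Project the volume to 2D along the viewing axis, after slicing away
    the first `clip_depth` planes (the interactive "slice" control)."""
    return vol[clip_depth:].max(axis=0)

image = render_mip(volume, clip_depth=32)  # a 256 x 256 projection
print(image.shape)
```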