Picture yourself performing a procedure. Rather than having to look away from the patient to see imaging displayed on a separate, 2D screen, you can see the information from the CT scan in three dimensions, overlaid on the patient’s anatomy. Or consider your experience as a medical student. How much richer would your learning experience have been (or be) if you could view human anatomy immersively, in three dimensions? These exciting scenarios are steadily becoming the new reality—augmented and virtual reality, that is.
First developed for video gaming, virtual reality (VR) provides the ability to see in three dimensions and 360 degrees by means of a headset—a totally immersive experience. Augmented reality (AR), which isn’t as immersive as VR, provides the ability to see information in 3D without a head-mounted display. AR also carries a lower incidence of cybersickness, a condition that can affect up to 30 percent of the population.
A number of companies are betting on the future of VR and its seemingly limitless possibilities:
- Facebook purchased headset startup Oculus VR for $2 billion in 2014.
- In 2016, Microsoft introduced HoloLens, which it describes as “mixed reality.” Holographic images are displayed in the center frame of a headset, preserving the user’s peripheral vision. Microsoft calls HoloLens a “totally untethered, fully contained holographic computer.”
- Although Google Glass was not successful in the consumer market, last fall the company announced plans for Glass Enterprise Edition, designed specifically for the workplace.
Researchers around the world are working on how to apply these technologies to medicine in general and to IR in particular, with potential for medical education, procedure planning and even performing procedures. One place where this exploration is being carried out is the Maryland Blended Reality Center (MBRC) at the University of Maryland in College Park. The MBRC brings the university’s visual computing experts together with emergency medicine professionals at the R Adams Cowley Shock Trauma Center in Baltimore to advance visual computing for health care. Research and development projects include military medicine, telemedicine, simulation and readiness training, critical care patient diagnostics, and non-opioid pain management, as well as human anatomy education. Much of this work takes place in the futuristic “Augmentarium,” a facility that includes projection displays, AR visors, GPU clusters, and technologies for studying human vision and human-computer interaction.
In a recent live demonstration for members of the press, Sarah Murthi, MD, associate professor of surgery at the UMD School of Medicine and trauma surgeon at the Cowley Shock Trauma Center, performed an ultrasound examination of a patient’s heart using AR and the Microsoft HoloLens.
“While imaging has radically evolved, how images are displayed is basically the same as it was in 1950,” Dr. Murthi says. “Visual data are always shown on a 2D flat screen, on displays that force health care providers to look away from the patient, and even away from their own hands while performing procedures.” Dr. Murthi also notes that the images are not displayed from the perspective of the viewer, but rather from that of the imaging device. “AR’s potential ability to concurrently display imaging data and other patient information could save lives and decrease medical errors.”
Potential applications
Several of the posters and presentations at the SIR 2018 Annual Scientific Meeting demonstrate the intense interest in adapting AR and VR for use in interventional radiology, including as medical simulation training for students, for patient education, and for planning and conducting procedures. Vikash Gupta, MD, a resident at the University of Maryland Medical Center, provided an overview of the many possibilities in a poster titled, “Virtual and AR in the interventional radiology suite: What the interventionalist should know.”
“Technology has evolved so much during residency,” he says. “We are trying to see whether AR and VR hold the same promise as any new technology: to make procedures easier and more efficient.” He sees potential in many areas, including preoperative planning and patient education. An image on his poster demonstrates the “limitless” potential of AR displays, showing six studies displayed simultaneously using Microsoft HoloLens.
Planning procedures. AR can be useful in planning treatment for splenic artery aneurysms and increasing physician confidence in the procedure, as demonstrated by Nishita Kothary, MD, FSIR, and Zlatko Devcic, MD, of Stanford University. “These are some of the most challenging cases in IR,” Dr. Kothary says. She and her colleagues performed a retrospective study, reconstructing archived volumetric data using True 3D software from EchoPixel. They then queried IRs about their confidence in identifying the inflow/outflow arteries; increased confidence was reported in 93 percent of cases. Among IRs with only one year of experience, confidence increased in 100 percent of cases.
"VR provides much better spatial orientation and a more intuitive understanding of the patient's anatomy. The ability to rotate images and zoom in gives operators a greater appreciation for each patient's unique anatomy," says Dr. Devcic.
“It’s like GPS on steroids,” Dr. Kothary says. “These reconstructed data sets provided us with solid information. When you have a good understanding of the target to relevant arterial structures, everything goes much more smoothly.”
A similar study was conducted at the University of Pennsylvania using Microsoft HoloLens and custom software. Brian Park, MD, MS, and Terence Gade, MD, PhD, implemented 3D holographic volume-rendering workflows using both surface- and direct volume-rendering techniques. Working at the Penn Image-guided Interventions (PIGI) Lab, they assessed the techniques in lung tumor ablation, liver abscess drainage, pre-Y-90 radioembolization and foot tumor ablation. Among their most significant findings: direct volume-rendered (3D) images took only 1–2 minutes to render, compared with up to 45 minutes for surface rendering, even though direct volume rendering is more computationally intensive. Every faculty member who participated reported enhanced confidence in procedural anatomy.
“The models are accurate to scale with the ability to rotate, translate and magnify holograms in real time,” Dr. Park points out. “The dynamic cutting plane allows virtual dissection and exploration of internal volume contents from any angle.” He also notes that virtual needle tracks can be placed to visualize the optimal approach. Full implementation of the technology could help to optimize the procedural approach and avoid potential complications.
Performing procedures. Dr. Gade, co-founder of the PIGI Lab, and his team are investigating how these technologies can be integrated into clinical workflows in meaningful ways. It’s critical that providers feel comfortable with the technology and see the potential benefit, he says. “We must answer questions like, Does it offer something clinicians don’t have? and Can they use it without it interrupting their work?”
AR offers the possibility of 3D, co-registered projections that could represent a new way of performing image-guided procedures. Co-registration takes previous medical imaging and links it with real-time images. Dr. Gupta and his colleagues have developed a prototype that allows them to see the trajectory of their devices in 3D. “It could represent a whole new way to perform our jobs,” he says.
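The geometric core of co-registration is estimating the transform that maps the prior scan’s coordinate frame onto the live view. As a minimal illustrative sketch (not the prototype described above), assuming paired anatomical landmarks have already been identified in both data sets, a rigid transform can be recovered with the classic Kabsch least-squares method:

```python
import numpy as np

def rigid_register(source, target):
    """Estimate the rotation R and translation t that best map
    source landmark points onto target points (least-squares Kabsch)."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Hypothetical landmarks: four points from a prior CT (source) and the
# same anatomy located in the live view (target), here simulated by a
# known 90-degree rotation plus a translation.
ct_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
live_pts = ct_pts @ true_R.T + np.array([5.0, 2.0, 1.0])

R, t = rigid_register(ct_pts, live_pts)
mapped = ct_pts @ R.T + t   # prior-scan landmarks expressed in the live frame
```

Real systems must go further, adding deformable registration and continuous tracking, since (as Dr. Xu notes below) organs and targets move during procedures; this sketch covers only the static, rigid case.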
That may still be a long way off, says Bradford Wood, MD, IR chief at the National Institutes of Health (NIH) Clinical Center. “There are many techniques searching for a home—a true clinical need—and there are a lot of hurdles on the way. It’s one thing to do planning and simulation, but the real value is yet to come. The ability to superimpose images during interventions and get feedback that allows iterative modification is the goal.” These applications are especially challenging because organs and targets move during procedures, says his colleague, Sheng Xu, PhD, of the NIH Clinical Center Radiology and Imaging Sciences Department.
Information becomes old in a nanosecond, Dr. Wood notes, and must be updated in real time. There’s additional challenge in keeping track of the physician’s eyes and head. “Gaming developments allow us to have this conversation,” he says.
Student and patient education. Dr. Gupta describes medical education as the low-hanging fruit of these technologies. “IR can be daunting for beginners,” he says. “Further, there is no good way to set up a cost-effective simulation environment. Using VR to demonstrate these procedures could bring the barriers way down.”
Further, AR scenarios can facilitate training for rare occurrences otherwise only encountered over a lifetime of learning, notes Barbara Brawn-Cinani, associate director of the Maryland Blended Reality Center.
Using these technologies to engage learners is a passion for Ziv J Haskal, MD, FSIR, editor-in-chief of the Journal of Vascular and Interventional Radiology (JVIR) and professor of radiology at the University of Virginia School of Medicine. Inspired by an early video on the New York Times VR app, shot in a buffalo field at Yellowstone National Park, Dr. Haskal quickly realized the potential. “It was clear that that kind of immersive experience was going to be the beginning of something dramatic, even at that simplest VR level,” he says. “It also makes people feel capable and removes barriers to adoption.”
Using funding from an annual innovation grant provided by JVIR publisher Elsevier, Dr. Haskal began to explore the possibilities of AR and VR for education. “I wanted to fire a heavy rocket, beyond the consumer-level camera, play with the ergonomics and the floating in of objects, and demonstrate the extent of the capabilities of these technologies,” he says. He determined to produce a series of educational videos using VR.
The shoot took months of planning and storyboarding, as the team experimented with placement of cameras and people, point-of-view perspective, where to position screens and many other considerations. “You must understand what the viewer wants to see and incorporate elements of filmmaking and production,” Dr. Haskal says. It was a professional shoot, using two cameras, one full-perspective and one point-of-view, as well as two additional sources of input from the inset screens. The resulting videos are extremely high-resolution, with 360-degree perspective. “They provide a training and educational experience that switches on a different sense of presence in the viewer’s mind and causes them to become enveloped, resident in the space,” Dr. Haskal says. “They mimic apprenticeship training.”
The choice of procedure was an important consideration as well, he says. He decided on transjugular intrahepatic portosystemic shunt (TIPS). “We took a procedure that is well-established but difficult and anxiety-producing. We used the videos to teach it methodically, step by step. The VR brings extra value,” he says. The procedure took just under an hour; the video was divided into ten segments of approximately four minutes each. Although somewhat long for VR, people do watch them in their entirety, Dr. Haskal says: “That means they are engaged with the content.”
He showed the videos for the first time during his “Extreme IR” session at SIR 2018, distributing to attendees JVIR-branded VR viewers. The full collection of videos can be viewed at bit.ly/2KzvWH7 (see sidebar for viewing instructions).
Dr. Haskal believes VR videos also have potential for patient education. “Patients don’t really know what will happen once we roll them down the hallway into the room,” he says. “Such immersive videos could help considerably.”
Challenges
There are major challenges to widespread adoption of AR and VR in interventional radiology. “High-quality headsets are very expensive, and that cost can hinder innovation,” says Dr. Gupta. Although the cost of Oculus headsets has come down in recent months, Microsoft HoloLens costs about $3,000. EchoPixel charges $25,000 per year for a subscription to its software.
The logistics of organizing his shoot were daunting, says Dr. Haskal, who reports experiencing tremendous personal stress around identifying suitable patients. (But the patient who eventually was filmed was very enthusiastic about participating, he adds.) Dr. Haskal also had to find a partner company for hardware and software, obtain the necessary approvals to use the IR suite for two days, arrange a high-definition feed with the fluoro units, conduct conversations and get approvals from anesthesia, as well as perform 20–25 hours of postproduction work.
Future
Dr. Kothary believes that AR and VR will be combined with cone-beam CT images to make navigation progressively simpler and easier, as well as to decrease radiation and contrast use.
The next step in Dr. Wood’s work is interfacing with fused images in a semi-AR environment via a “smart needle.” “This has potential for proving you have finished an ablation and gotten every last cell,” he says. He and Dr. Xu have developed an app that uses a smartphone’s camera and gyroscope to optimize needle-insertion angle for planning and performing percutaneous CT-guided biopsies and ablations, which was described at SIR 2017.
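The internals of the NIH app aren’t described here, but the underlying geometry is straightforward to sketch. Assuming the phone’s gyroscope reports the needle’s pitch and roll and the CT plan supplies a target trajectory, the deviation from plan is simply the angle between two unit vectors (the function names and orientation convention below are illustrative, not the app’s actual API):

```python
import math

def needle_direction(pitch_deg, roll_deg):
    """Unit vector of the needle axis, assuming the phone lies flat
    along the needle and reports pitch/roll from its gyroscope."""
    p, r = math.radians(pitch_deg), math.radians(roll_deg)
    # Simple pitch-then-roll composition (an illustrative convention)
    return (math.cos(p) * math.sin(r),
            math.sin(p),
            math.cos(p) * math.cos(r))

def deviation_deg(current, planned):
    """Angle in degrees between the current needle axis and the planned trajectory."""
    dot = sum(c * q for c, q in zip(current, planned))
    norm = (math.sqrt(sum(c * c for c in current)) *
            math.sqrt(sum(q * q for q in planned)))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Planned trajectory from the CT plan: 30 degrees of pitch, no roll;
# the operator is currently holding the needle at 25 degrees of pitch.
planned = needle_direction(30.0, 0.0)
off_plan = deviation_deg(needle_direction(25.0, 0.0), planned)  # 5.0 degrees
```

A real guidance app would fuse gyroscope, accelerometer and camera data and update the readout continuously; this shows only the angle computation at its core.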
Dr. Haskal has his sights on the future of medical education and has scripted more videos for a new 17K stereoscopic VR camera, which provides the highest possible quality. “Simulators are large, inefficient and impractical,” he says. “We can use VR to teach every medical student, who can demonstrate benchmarks for increased training confidence on a new, complex device at the time of need.” He has formed a small production group and will release future videos through the JVIR YouTube channel.
“We are excited for the future, where the use of AR in health care will be just as commonplace as use of a stethoscope,” says Dr. Murthi.
Viewing JVIR VR videos
To view the JVIR VR videos using a Google Cardboard-style viewer:
- If you have not already done so, download and install the YouTube app for your smartphone
- Using your smartphone, download a JVIR VR video at bit.ly/2KzvWH7
- Open the video using the YouTube app
- When the video is playing, touch the VR viewer icon
- Insert phone into viewer, screen facing up, and close the back panel
- View video through the lenses, gently repositioning phone within viewer to ensure proper viewing alignment