I am currently a final-year computer science Ph.D. student at Purdue University. My advisor is Voicu Popescu.
My research focuses on the use of augmented and virtual reality for interactive guidance and task completion. First, I am interested in applications of AR/VR in medicine and education, such as surgical telementoring. Second, I am interested in providing automated AR guidance to users completing acquisition tasks, such as 3D scanning of indoor environments to a desired level of coverage.
|AR HMD Guidance for Controlled Hand-Held 3D Acquisition
Andersen D, Villano P, Popescu V. “AR HMD Guidance for Controlled Hand-Held 3D Acquisition.” ISMAR 2019, Beijing, China, October 2019; to appear in TVCG.
Photogrammetry is a popular method of 3D reconstruction that uses conventional photos as input. It can achieve high-quality reconstructions so long as the scene is densely acquired from multiple views with sufficient overlap between nearby images. However, it is challenging for a human operator to know during acquisition whether sufficient coverage has been achieved. Insufficient coverage of the scene can result in holes, missing regions, or even a complete failure of reconstruction. These errors require manually repairing the model or returning to the scene to acquire additional views, which is time-consuming and often infeasible. We present a novel approach to photogrammetric acquisition that uses an AR HMD to predict a set of covering views and to interactively guide an operator to capture imagery from each view. The operator wears an AR HMD and uses a handheld camera rig that is tracked relative to the AR HMD with a fiducial marker. The AR HMD tracks its pose relative to the environment and automatically generates a coarse geometric model of the scene, which our approach analyzes at runtime to generate a set of human-reachable acquisition views covering the scene with consistent camera-to-scene distance and image overlap. The generated view locations are rendered to the operator on the AR HMD. Interactive visual feedback informs the operator how to align the camera to assume each suggested pose. When the camera is in range, an image is automatically captured. In this way, a set of images suitable for 3D reconstruction can be captured in a matter of minutes. In a user study, participants who were novices at photogrammetry were tasked with acquiring a challenging and complex scene, either without guidance or with our AR-HMD-based guidance. Participants using our guidance achieved improved reconstructions, with none of the reconstruction failures that occurred in the control condition.
Our AR-HMD-based approach is self-contained and portable, and it provides specific acquisition guidance tailored to the geometry of the scene being captured.
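The consistent camera-to-scene distance and image-overlap constraints described above can be illustrated with a minimal sketch. This is a hypothetical simplification for a single fronto-parallel planar patch (the actual system analyzes the coarse geometric model of an arbitrary scene at runtime); the function names and parameters are illustrative, not from the paper:

```python
import math

def view_spacing(distance_m, fov_deg, overlap):
    """Spacing between adjacent acquisition views so that neighboring
    images of a fronto-parallel surface share the given overlap fraction."""
    # Width of the surface region seen by one camera at this distance.
    footprint = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    return footprint * (1 - overlap)

def grid_views(width_m, height_m, distance_m, fov_deg=60.0, overlap=0.7):
    """Candidate view positions covering a width x height planar patch,
    all at the same camera-to-scene distance."""
    s = view_spacing(distance_m, fov_deg, overlap)
    xs = [i * s for i in range(int(width_m / s) + 1)]
    ys = [j * s for j in range(int(height_m / s) + 1)]
    return [(x, y, distance_m) for y in ys for x in xs]
```

With a 60° field of view at 1 m, a 50% overlap requirement spaces views roughly 0.58 m apart; raising the overlap to 70% tightens the grid accordingly.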
|HMD-Guided Image-Based Modeling and Rendering of Indoor Scenes
Andersen D, Popescu V. “HMD-Guided Image-Based Modeling and Rendering of Indoor Scenes.” EuroVR 2018, London, UK, October 2018.
We present a system that enables a novice user to acquire a large indoor scene in minutes as a collection of images sufficient for five degrees-of-freedom virtual navigation by image morphing. The user walks through the scene wearing an augmented reality head-mounted display (AR HMD) enhanced with a panoramic video camera. The AR HMD shows a 2D grid of a dynamically generated floor plan, which guides the user to acquire a panorama from each grid cell. After acquisition, panoramas are preliminarily registered using the AR HMD tracking data, corresponding features are detected in pairs of neighboring panoramas, and the correspondences are used to refine panorama registration. The registered panoramas and their correspondences support rendering the scene interactively with any view direction and from any viewpoint on the acquisition plane. An HMD VR interface guides the user, who optimizes visualization fidelity interactively by aligning the viewpoint with one of the hundreds of acquisition locations evenly sampling the floor plane.
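The grid-based guidance above amounts to bookkeeping over floor-plan cells: as the user walks, the system checks which cell they are in and whether it still needs a panorama. The sketch below is a hypothetical simplification (the actual system builds the floor plan dynamically from AR HMD tracking); the class and method names are illustrative:

```python
def cell_of(x, z, cell_size=1.0):
    """Grid cell containing a floor-plane position (x, z), in meters."""
    return (int(x // cell_size), int(z // cell_size))

class CoverageGrid:
    """Tracks which floor-plan cells still need a panorama."""
    def __init__(self, cells):
        self.pending = set(cells)     # cells without a panorama yet
        self.acquired = {}            # cell -> position where it was captured

    def record(self, x, z, cell_size=1.0):
        """Called with the user's tracked position; returns True when the
        user has entered an uncovered cell and a panorama should be taken."""
        c = cell_of(x, z, cell_size)
        if c in self.pending:
            self.pending.remove(c)
            self.acquired[c] = (x, z)
            return True
        return False
```

Once `pending` is empty, every cell of the floor plan has an acquisition location, which is what makes the even viewpoint sampling for image morphing possible.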
|An AR-Guided System for Fast Image-Based Modeling of Indoor Scenes
Andersen D, Popescu V. “An AR-Guided System for Fast Image-Based Modeling of Indoor Scenes.” IEEE VR 2018 (poster), Reutlingen, Germany, March 2018.
We present a system that enables a novice user to acquire a large indoor scene in minutes as a collection of images that are sufficient for five degrees-of-freedom virtual navigation by image morphing. The user walks through the scene wearing an augmented reality head-mounted display (AR HMD) enhanced with a panoramic video camera. The AR HMD visualizes a 2D grid partitioning of a dynamically generated floor plan, which guides the user to acquire a panorama from each grid cell. The panoramas are registered offline using both AR HMD tracking data and structure-from-motion tools. Feature correspondences are established between neighboring panoramas. The resulting panoramas and correspondences support interactive rendering via image morphing with any view direction and from any viewpoint on the acquisition plane.
|Augmented Visual Instruction for Surgical Practice and Training
Andersen D, Lin C, Popescu V, Rojas Muñoz E, Cabrera ME, Mullis B, Zarzaur B, Marley S, Wachs J. “Augmented Visual Instruction for Surgical Practice and Training.” VAR4Good 2018 – Virtual and Augmented Reality for Good (workshop paper), Reutlingen, Germany, March 2018.
This paper presents two positions about the use of augmented reality (AR) in healthcare scenarios, informed by the authors’ experience as an interdisciplinary team of academics and medical practitioners who have been researching, implementing, and validating an AR surgical telementoring system. First, AR has the potential to greatly improve the areas of surgical telementoring and of medical training on patient simulators. In austere environments, surgical telementoring that connects surgeons with remote experts can be enhanced with the use of AR annotations visualized directly in the surgeon’s field of view. Patient simulators can gain additional value for medical training by overlaying the current and future steps of procedures as AR imagery onto a physical simulator. Second, AR annotations for telementoring and for simulator-based training can be delivered either by video see-through tablet displays or by AR head-mounted displays (HMDs). The paper discusses the two AR approaches by looking at accuracy, depth perception, visualization continuity, visualization latency, and user encumbrance. Specific advantages and disadvantages to each approach mean that the choice of one display method or another must be carefully tailored to the healthcare application in which it is being used.
[ PDF ]
|Surgical Telementoring without Encumbrance: A Comparative Study of See-through Augmented Reality based Approaches
Rojas-Muñoz E, Cabrera ME, Andersen D, Popescu V, Marley S, Mullis B, Zarzaur B, Wachs J. “Surgical Telementoring without Encumbrance: A Comparative Study of See-through Augmented Reality based Approaches.” Annals of Surgery (2018), in press.
Objective: This study investigates the benefits of a surgical telementoring system based on an augmented reality head-mounted display (ARHMD) that overlays surgical instructions directly onto the surgeon’s view of the operating field, without workspace obstruction.
Summary Background Data: In conventional telestrator-based telementoring, the surgeon views annotations of the surgical field by shifting focus to a nearby monitor, which substantially increases cognitive load. As an alternative, tablets have been interposed between the surgeon and the patient to display instructions; however, tablets impose additional obstructions of the surgeon’s motions.
Methods: Twenty medical students performed anatomical marking (Task1) and abdominal incision (Task2) on a patient simulator, in one of two telementoring conditions: ARHMD and telestrator. The dependent variables were placement error, number of focus shifts, and completion time. Furthermore, workspace efficiency was quantified as the number and duration of potential surgeon/tablet collisions avoided by the ARHMD.
Results: The ARHMD condition yielded smaller placement errors (Task1: 45%, P < 0.001; Task2: 14%, P = 0.01), fewer focus shifts (Task1: 93%, P < 0.001; Task2: 88%, P = 0.0039), and longer completion times (Task1: 31%, P < 0.001; Task2: 24%, P = 0.013). Furthermore, the ARHMD avoided potential tablet collisions (4.8 collisions for 3.2 s in Task1; 3.8 collisions for 1.3 s in Task2).
Conclusion: The ARHMD system promises to improve accuracy and to eliminate focus shifts in surgical telementoring. Because ARHMD participants were able to refine their execution of instructions, task completion time increased. Unlike a tablet system, the ARHMD does not require modifying natural motions to avoid collisions.
|A Hand-Held, Self-Contained Simulated Transparent Display
Andersen D, Popescu V, Lin C, Cabrera ME, Shanghavi A, Wachs J. “A Hand-Held, Self-Contained Simulated Transparent Display.” ISMAR’16 – Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (poster), Merida, Mexico, September 2016.
Hand-held transparent displays are important infrastructure for augmented reality applications. Truly transparent displays are not yet feasible in hand-held form, and a promising alternative is to simulate transparency by displaying the image the user would see if the display were not there. Previous simulated transparent displays have important limitations, such as being tethered to auxiliary workstations, requiring the user to wear obtrusive head-tracking devices, or lacking the depth acquisition support that is needed for an accurate transparency effect for close-range scenes.
We describe a general simulated transparent display and three prototype implementations (P1, P2, and P3), which take advantage of emerging mobile devices and accessories. P1 uses an off-the-shelf smartphone with built-in head-tracking support; P1 is compact and suitable for outdoor scenes, providing an accurate transparency effect for scene distances greater than 6 m. P2 uses a tablet with a built-in depth camera; P2 is compact and suitable for short-distance indoor scenes, but the user has to hold the display in a fixed position. P3 uses a conventional tablet enhanced with on-board depth acquisition and head-tracking accessories; P3 compensates for user head motion and provides accurate transparency even for close-range scenes. The prototypes are hand-held and self-contained, with no need for auxiliary workstations for computation.
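The core of the transparency effect is a reprojection: for each scene point, find where the user's line of sight to that point crosses the display, and draw the scene's color there. The sketch below is a minimal illustration assuming a display lying in the plane z = display_z and a tracked eye position (the prototypes combine this with head tracking and depth acquisition); the function name is illustrative:

```python
def reproject_to_display(eye, point, display_z=0.0):
    """Where on the display plane (z = display_z) the user's line of
    sight to a 3D scene point crosses; drawing the scene color at that
    location makes the display appear transparent for that point."""
    ex, ey, ez = eye
    px, py, pz = point
    t = (display_z - ez) / (pz - ez)  # parameter along the eye->point ray
    return (ex + t * (px - ex), ey + t * (py - ey), display_z)
```

This is also why head tracking matters: moving the eye laterally shifts the crossing point on the display, so without compensating for head motion the transparency effect breaks for close-range scenes.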
|Medical telementoring using an augmented reality transparent display
Andersen D, Popescu V, Cabrera ME, Shanghavi A, Gomez G, Marley S, Mullis B, Wachs JP. “Medical telementoring using an augmented reality transparent display.” Surgery. 2016 Jun 30;159(6):1646-53.
Background: The goal of this study was to design and implement a novel surgical telementoring system called STAR (System for Telementoring with Augmented Reality) that uses a virtual transparent display to convey precise locations in the operating field to a trainee surgeon. This system was compared to a conventional system based on a telestrator for surgical instruction.
Methods: A telementoring system was developed and evaluated in a study which used a 1 x 2 between-subjects design with telementoring system, i.e. STAR or Conventional, as the independent variable. The participants in the study were 20 pre-medical or medical students who had no prior experience with telementoring. Each participant completed a task of port placement and a task of abdominal incision under telementoring using either the STAR or the Conventional system. The metrics used to test performance when using the system were placement error, number of focus shifts, and time to task completion.
Results: When compared to the Conventional system, participants using STAR completed the two tasks with less placement error (45% and 68%) and with fewer focus shifts (86% and 44%), but more slowly (19% for each task).
Conclusions: Using STAR resulted in decreased annotation placement error and fewer focus shifts, but greater times to task completion. STAR placed virtual annotations directly onto the trainee surgeon’s view of the operating field, conveying location with great accuracy; this technology helped avoid focus shifts and the associated loss of depth perception, and enabled fine-tuning execution of the task to match telementored instruction, but led to greater times to task completion.
[ PDF ]
|Avoiding Focus Shifts in Surgical Telementoring Using an Augmented Reality Transparent Display
Andersen D, Popescu V, Cabrera ME, Shanghavi A, Gomez G, Marley S, Mullis B, Wachs J. “Avoiding Focus Shifts in Surgical Telementoring Using an Augmented Reality Transparent Display.” Medicine Meets Virtual Reality 22: NextMed/MMVR22. 2016 Apr 19;220:9.
Also presented at NextMed/MMVR22 2016.
Conventional surgical telementoring systems require the trainee to shift focus away from the operating field to a nearby monitor to receive mentor guidance. This paper presents a next-generation telementoring system. Our system, STAR (System for Telementoring with Augmented Reality), avoids focus shifts by placing mentor annotations directly into the trainee’s field of view using augmented reality transparent display technology. This prototype was tested with pre-medical and medical students. Experiments were conducted in which participants were asked to identify precise operating field locations communicated to them using either STAR or a conventional telementoring system. STAR was shown to improve accuracy and to reduce focus shifts. The initial STAR prototype only provides an approximate transparent display effect, without visual continuity between the display and the surrounding area. The current version of our transparent display provides visual continuity by showing the geometry and color of the operating field from the trainee’s viewpoint.
[ PDF ]
|Virtual annotations of the surgical field through an augmented reality transparent display
Andersen D, Popescu V, Cabrera ME, Shanghavi A, Gomez G, Marley S, Mullis B, Wachs J. “Virtual annotations of the surgical field through an augmented reality transparent display.” The Visual Computer. 2015 May 27:1-8.
Existing telestrator-based surgical telementoring systems require a trainee surgeon to shift focus frequently between the operating field and a nearby monitor to acquire and apply instructions from a remote mentor. We present a novel approach to surgical telementoring where annotations are superimposed directly onto the surgical field using an augmented reality (AR) simulated transparent display. We present our first steps towards realizing this vision, using two networked conventional tablets to allow a mentor to remotely annotate the operating field as seen by a trainee. Annotations are anchored to the surgical field as the trainee tablet moves and as the surgical field deforms or becomes occluded. The system is built exclusively from compact commodity-level components—all imaging and processing are performed on the two tablets.
|AR Guidance for Trauma Surgery in Austere Environments
Andersen D, Rojas-Muñoz E, Lin C, Cabrera ME, Popescu V, Marley S, Anderson K, Zarzaur B, Mullis B, Wachs J. “AR Guidance for Trauma Surgery in Austere Environments.” EuroVR 2018 (industrial track). London, UK. 22-23 Oct 2018. Presentation.
[ Abstract ]
|STAR – A System for Telementoring with Augmented Reality
Andersen D, Popescu V, Cabrera ME, Shanghavi A, Rojas Muñoz EJ, Mullis B, Marley S, Gomez G, Wachs JP. “STAR – A System for Telementoring with Augmented Reality.” IMSH 2016. San Diego, CA. 16-20 Jan 2016. Interactive Demo.
|STAR: Using Augmented Reality Transparent Displays for Surgical Telementoring
Andersen, D. “STAR: Using Augmented Reality Transparent Displays for Surgical Telementoring.” Eskenazi Health 22nd Annual Trauma & Surgical Critical Care Symposium. Indianapolis, IN. 16 Oct 2015. Conference Presentation.
[ Slides ]
- NSF GRFP Fellowship, 2015 (three years of funding)