Projects

Navigation for the Visually Impaired Using a Google Tango RGB-D Tablet

As part of my work at Purdue’s Envision Center, starting in the spring of 2015 I began creating a navigation system for people with visual disabilities, using the depth camera on the Google Project Tango tablet. In short, it builds a 3D representation of the user’s surroundings in real time and converts it to 3D audio for navigational cues.
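
The heart of the idea is mapping where an obstacle (or waypoint) sits relative to the user’s head to a spatialized sound. As a rough, browser-based illustration of that mapping only (the actual system runs on the Tango tablet, and the distance-to-pitch/volume mapping below is invented for the example), a Web Audio sketch might look like this:

```javascript
// Minimal sketch: turn an obstacle position (meters, relative to the user's
// head) into a spatialized beep. This only illustrates the 3D-audio cue idea;
// the real system runs on the Tango tablet, not in a browser.
function playObstacleCue(x, y, z) {
  var ctx = new AudioContext();

  var panner = ctx.createPanner();
  panner.panningModel = 'HRTF';          // binaural cues for direction
  panner.setPosition(x, y, z);           // obstacle position relative to the listener

  var osc = ctx.createOscillator();
  var gain = ctx.createGain();
  // Closer obstacles sound louder and higher-pitched (arbitrary mapping).
  var distance = Math.sqrt(x * x + y * y + z * z);
  osc.frequency.value = 220 + 880 / Math.max(distance, 0.5);
  gain.gain.value = Math.min(1.0, 1.0 / Math.max(distance, 0.5));

  osc.connect(gain);
  gain.connect(panner);
  panner.connect(ctx.destination);

  osc.start();
  osc.stop(ctx.currentTime + 0.15);      // short beep
}

// Example: obstacle 2 m ahead and slightly to the right.
playObstacleCue(0.5, 0, -2);
```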

Here is a video demonstrating the current system (as of May 2015). Please note that the audio is very quiet due to hasty recording.

More information can be found here.

STAR: System for Telementoring with Augmented Reality

Since May 2014, I’ve been working with an interdisciplinary team at Purdue University on a research project that uses augmented reality to enhance surgical telementoring. With the System for Telementoring with Augmented Reality (STAR), we want to increase the mentor’s and trainee’s sense of co-presence through an augmented visual channel, leading to measurable improvements in the trainee’s surgical performance.

Here is a video demonstrating the current system (as of April 2015):


You can find more information on our project page.


GPU-Accelerated 3D Ant Colony Simulation and Visualization

Demo image

Project available on GitHub

This is the final project I did during my first semester of graduate school, for Purdue University’s Fall 2014 CS535 (Interactive Computer Graphics) course.

Ant colony optimization algorithms are useful for pathfinding, such as in robotics and distributed networks. In this simulated scenario, a collection of ants converges on common solutions for finding nearby food, even though each ant only has knowledge of its immediate surroundings. Because every ant acts only on local information, ant colony simulation is well suited to parallelization and GPGPU.

The basic setup of the simulation is that ants wander randomly from a central nest point, laying a pheromone trail behind them. They pick up food when they encounter it and return it to the nest. Other ants will follow existing trails, strengthening them over time.
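
To make that update rule concrete, here is a small single-threaded JavaScript sketch of one simulation step on a pheromone grid. It is only an illustration of the logic described above; the actual project runs an equivalent update in parallel on the GPU, and the constants and structure here are invented for the example:

```javascript
// Toy, single-threaded sketch of the simulation step. The real project runs
// an equivalent update for every ant and grid cell in parallel on the GPU.
var GRID = 64;
var EVAPORATION = 0.995;   // trails fade each step unless reinforced
var DEPOSIT = 1.0;

// pheromone[x][y] is the trail strength at each grid cell.
var pheromone = [];
for (var i = 0; i < GRID; i++) pheromone.push(new Float32Array(GRID));

function clamp(v) { return Math.max(0, Math.min(GRID - 1, v)); }

function stepAnt(ant, food, nest) {
  if (ant.carryingFood) {
    // Head back toward the nest, laying pheromone behind.
    ant.x = clamp(ant.x + Math.sign(nest.x - ant.x));
    ant.y = clamp(ant.y + Math.sign(nest.y - ant.y));
    pheromone[ant.x][ant.y] += DEPOSIT;
    if (ant.x === nest.x && ant.y === nest.y) ant.carryingFood = false;
  } else {
    // Wander: prefer the neighboring cell with the most pheromone, with a
    // little randomness so ants without a nearby trail still explore.
    var best = null, bestScore = -1;
    for (var dx = -1; dx <= 1; dx++) {
      for (var dy = -1; dy <= 1; dy++) {
        var nx = clamp(ant.x + dx), ny = clamp(ant.y + dy);
        var score = pheromone[nx][ny] + Math.random() * 0.1;
        if (score > bestScore) { bestScore = score; best = { x: nx, y: ny }; }
      }
    }
    ant.x = best.x;
    ant.y = best.y;
    if (ant.x === food.x && ant.y === food.y) ant.carryingFood = true;
  }
}

function evaporateTrails() {
  for (var x = 0; x < GRID; x++)
    for (var y = 0; y < GRID; y++)
      pheromone[x][y] *= EVAPORATION;
}
```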


Cupola VR Viewer screenshot

Cupola VR Viewer for Oculus Rift Head-Tracking in WebGL Virtual Environments

Get the Chrome packaged app here!

GitHub repository, including documentation and the Javascript client library you need to make your WebGL page work with Cupola

Cupola VR Viewer is a Google Chrome packaged app that makes it easier and smoother to connect the Oculus Rift with browser-based VR environments on the Internet. Basically, you install the “Cupola VR Viewer” app, connect your Rift, and paste in the URL of a Cupola-supported VR webpage. The webpage needs to use the “cupola.js” Javascript library, which is available here.
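
As a rough sketch of what a Cupola-enabled three.js page looks like (the Cupola class and method names below are placeholders, not the exact cupola.js API; see the GitHub documentation for the real interface):

```javascript
// Illustrative only: the Cupola class and method names below are placeholders,
// not the exact cupola.js API -- see the GitHub docs for the real interface.
// The idea is simply to copy the Rift orientation onto the three.js camera.
// Assumes three.js and cupola.js are already loaded on the page.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var cupola = new Cupola();   // hypothetical wrapper around the app's message channel
cupola.connect();            // start receiving orientation from the Chrome app

function animate() {
  requestAnimationFrame(animate);
  var q = cupola.getOrientation();          // placeholder accessor: {x, y, z, w}
  if (q) camera.quaternion.set(q.x, q.y, q.z, q.w);
  renderer.render(scene, camera);
}
animate();
```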

In the app, I’ve provided links to a couple of sample WebGL pages that support Cupola, which you can load to get head-tracking working. You can also drag and drop the Oculus config files into the app to use your calibration data (still experimental; it doesn’t persist on exit/restart).

My work here is similar to (and inspired by) vr.js and oculus-bridge, but with a couple of differences and improvements:

– vr.js is an awesome NPAPI plugin for Chrome and Firefox, but unfortunately Chrome is retiring NPAPI support.

– oculus-bridge uses a standalone application that interacts with the Oculus SDK and provides a WebSocket stream of orientation data that a website can connect to. However, the WebSocket hop adds roughly 10 milliseconds of delay, which I find noticeable and disorienting.

In contrast, Cupola VR Viewer uses Chrome’s USB API to get the raw sensor data from the Rift, and I’ve reimplemented parts of the Oculus SDK in Javascript to translate that sensor data into a head orientation. I find that this approach provides lower latency than WebSockets, and it is unaffected by the loss of NPAPI plugins in Chrome.
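
The core of that reimplementation is the sensor fusion step: each gyro sample (an angular velocity over a short time step) is integrated into the current orientation quaternion. A simplified sketch of just that integration, leaving out the drift correction the real SDK performs with the accelerometer and magnetometer, looks like this:

```javascript
// Simplified gyro integration: rotate the current orientation quaternion q
// (as {w, x, y, z}) by an angular-velocity sample (rad/s) measured over dt
// seconds. The real fusion code also corrects drift with the accelerometer
// and magnetometer; that part is omitted here.
function integrateGyro(q, gx, gy, gz, dt) {
  var mag = Math.sqrt(gx * gx + gy * gy + gz * gz);
  if (mag * dt < 1e-9) return q;          // no measurable rotation this sample

  var theta = mag * dt;                   // rotation angle for this sample
  var s = Math.sin(theta / 2) / mag;      // scales the (unnormalized) axis
  var dq = { w: Math.cos(theta / 2), x: gx * s, y: gy * s, z: gz * s };

  // Quaternion multiply q * dq: apply the incremental body rotation to q.
  return {
    w: q.w * dq.w - q.x * dq.x - q.y * dq.y - q.z * dq.z,
    x: q.w * dq.x + q.x * dq.w + q.y * dq.z - q.z * dq.y,
    y: q.w * dq.y - q.x * dq.z + q.y * dq.w + q.z * dq.x,
    z: q.w * dq.z + q.x * dq.y - q.y * dq.x + q.z * dq.w
  };
}

// Example: a 100 Hz gyro sample arriving from the USB read loop.
var orientation = { w: 1, x: 0, y: 0, z: 0 };
orientation = integrateGyro(orientation, 0.01, 0.02, 0.0, 1 / 100);
```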

If you’re interested in using the Rift with browser-based virtual environments, please check this out! WebGL and three.js make it really easy to set up 3D environments, and I think a system like this will be really useful to the VR community.

Let me know (at cupolavr@gmail.com) if there are any questions, comments, feedback, bug reports, pull requests or anything like that. I really want to make something useful for all of you in the Rift community. Thanks!


University of Utah RASC-AL ROBO-OPS Team

The design for the "Mars rover." My team developed the software and camera/video interface.
The design for the “Mars rover.” My team developed the software and camera/video interface.

In Spring 2011, I developed a streaming camera system for the University of Utah Robotics Club (RoboUtes) as part of the 2011 RASC-AL ROBO-OPS competition. This competition, funded by NASA, had over a dozen teams creating “Mars rovers” that would traverse a desert landscape in Texas, picking up specially-colored rocks, while being remotely controlled by team members located at the home campus (in our case, in Utah).

As part of the software team for this project, I was responsible for developing a camera system that ran on a low-powered BeagleBoard single-board computer, taking raw image data from a series of Logitech cameras and wirelessly streaming it to a client application at our “mission control” on campus. The system also included pan/tilt controls that let an operator aim the camera remotely.

Finally, I did some image processing with OpenCV, analyzing the incoming image data to highlight rocks of a particular color and help the team seek out collection targets more quickly.
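
The detection itself boils down to color thresholding: convert each pixel to something like HSV and keep the ones whose hue and saturation fall in the target band. The rover code did this with OpenCV; the snippet below is only a plain-JavaScript illustration of the same idea on RGBA pixel data, with made-up hue bounds:

```javascript
// Plain-JavaScript illustration of color thresholding on RGBA pixel data
// (e.g. from canvas getImageData). The rover code did the equivalent with
// OpenCV; the hue/saturation bounds here are made up for the example.
function highlightHue(imageData, hueMin, hueMax, satMin) {
  var px = imageData.data;                 // [r, g, b, a, r, g, b, a, ...]
  for (var i = 0; i < px.length; i += 4) {
    var r = px[i] / 255, g = px[i + 1] / 255, b = px[i + 2] / 255;

    // RGB -> hue (degrees) and saturation, enough for a threshold test.
    var max = Math.max(r, g, b), min = Math.min(r, g, b), d = max - min;
    var sat = max === 0 ? 0 : d / max;
    var hue = 0;
    if (d > 0) {
      if (max === r)      hue = 60 * (((g - b) / d) % 6);
      else if (max === g) hue = 60 * ((b - r) / d + 2);
      else                hue = 60 * ((r - g) / d + 4);
      if (hue < 0) hue += 360;
    }

    if (hue >= hueMin && hue <= hueMax && sat >= satMin) {
      px[i] = 255; px[i + 1] = 0; px[i + 2] = 255;   // paint matches magenta
    }
  }
  return imageData;
}

// Example: flag strongly saturated orange-ish pixels (bounds are arbitrary).
// highlightHue(ctx.getImageData(0, 0, width, height), 10, 40, 0.6);
```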

Our rover came in second place in the competition.
