Blog

My Family’s Experiences with the Vive

A few days ago, my parents, my sister, and my brother-in-law visited me for my graduation. While they were here, I demoed my Vive to them, and they absolutely loved it.

I had them try a relatively small set of titles (The Lab, Universe Sandbox, Tilt Brush, Destinations, and Google Earth VR), but they were good choices, as all of them are very “demo-able” for a small group.

I set up my monitor so onlookers could see what the Vive user was seeing, and a cheap speaker to mirror the audio. Because we wanted to be able to hear each other while in VR, we mostly skipped the earbuds (the demos we chose didn’t have much essential audio).

The SteamVR tutorial is a great way to get everyone accustomed to the controls, but it’s a little repetitive for each person in a group to do individually. What I ended up doing, which I think worked out well, was to run through the SteamVR tutorial myself while the others watched the screen and my actions. We had to explain a few of the controls along the way as new people tried on the Vive, but it generally worked out well.

Here is a collection of various reactions and insights from watching my family try out these demos:

The Lab:

Definitely the best demo to start people on. Everything is just polished and well-designed in terms of user interaction. Some of my family had a little trouble with the teleport mechanic for a few minutes, but nothing too serious.

The Aperture-Science-style humor definitely fits my family. They were cackling with glee as they mowed down waves of cartoon enemies in Longbow.

It surprised me just how popular the robot dog was with my sister. I think she spent a good 10-15 minutes playing fetch and petting it! And that one giant eyeball is the cutest thing when the dog is enjoying being petted.

Universe Sandbox:

My dad was the main one who tried this. He seemed to get a real sense of awe from scenarios with lots of moving objects (like Saturn and its many moons).

This one still has some UI clunkiness and was harder to use, especially in how the tools are selected or manipulated. The grab/scale movement controls with the grip seemed to work well, though.

Tilt Brush:

This was extremely popular with my family, especially as a conversation topic for pondering the potential of VR. My dad, who is an artist/illustrator (you can see his work at http://jonandersenart.com/work/), was fascinated with the potential of drawing in 3D, especially for architects. He loved how he could draw a building around himself and then rescale it. As an artist, he spent a lot of time testing out each of the brushes (with lots of undo/redo) to see what he could accomplish in Tilt Brush. The straightedge tool was a good start, but we would all like to see more constraint-based tools like those in a CAD program.

We also did some neat collaborative drawing. For example, my dad would draw the foundations/outlines of a building, then hand the headset to me to add details to the building. Then I would hand it off to my sister (a horticulturist who does landscape/garden work for museums) to create some well-designed gardens surrounding the virtual building.

One thing I thought was funny was how my sister, as soon as she saw she had a “fire” brush, drew a little fire with wooden logs, then sat next to it and “enjoyed the heat.” A direct mirror of that one bit from the Vive trailer.

I noticed that this was one app where my dad chose to walk along the perimeter of the play area when putting on the headset, just to establish the bounds where he could safely walk. Perhaps it was a result of the “empty landscape” that Tilt Brush has as a background? Or perhaps it was more like getting a feel for his 3D “canvas.”

Sadly, this is the only demo I got a (very short) video recording of.

Destinations:

Not much to say about this one except that they thought it was neat. We went to the English churchyard, Mars, Valve HQ, and several other locations. I think what we enjoyed about this one was talking about the process of photogrammetry and the technical details of how one actually captures such scenes.

Google Earth VR:

This was the one that captured the most attention from my family. What I think is interesting is that we had very little interest in visiting “famous” landmarks or new areas around the world. Instead, we all wanted to find and show each other areas we had been to before, to relive old memories. We showed each other where we live(d) and work(ed), or travel destinations some of us had visited before. For example, my dad showed us all the cemetery of Staglieno in Genoa, which was a highlight of his travels to Italy when he was younger.

While the resolution you can get in Google Earth VR is somewhat disappointing at human scale, it works fine for large, monumental areas. In particular, we spent a lot of time going to mountaintops we had ascended ourselves (like Grandeur Peak in Utah) and then tracing back down along the trail. Because most of the scenery was far from where we were “standing,” we could still get wonderful views of the Salt Lake Valley that closely matched our memories of them.

One feature Google Earth VR needs is a way to search for locations by typing on the virtual keyboard. That said, part of the fun was in slowly navigating and trying to find a place by visual landmarks. We also had some trouble with a slow internet connection, so the app really needs more pre-caching ability.

Conclusion:

Overall, the clearest sign of how much my family was into the Vive was that it became something we ended up doing at some point every day of their visit. I had initially thought it would make for a fun evening on the first day, and then we would just do other things in town. But we made time every day for VR and had a blast.

I don’t think anyone had issues with motion sickness, and we only had one instance of a controller hitting a wall (with little force, as the user was just attempting to point at something). We initially had someone hold the cable for whoever was using the Vive, but we quickly found that it wasn’t necessary and that everyone could get themselves untangled when needed. The Vive is a very solid VR system, and this experience reinforced just how important and valuable room-scale tracking and hand tracking are in VR.

HoloLens Unity: Rendering different imagery to each eye

As part of my research work using the Microsoft HoloLens, I recently had to find a way to render different imagery to each eye (for example, running a shader with a different texture for geometry depending on whether I was viewing it with the left or right eye).

Initially, I tried the approach discussed by davidlively in the Unity forums, which takes advantage of the fact that OnPreRender() is called twice per frame (once per eye) when rendering in stereo. In this approach, we keep track of a variable for the active eye (0 for left, 1 for right), alternating between those values (and updating a flag in a shader) on each call of OnPreRender(). However, while this reportedly works on the Oculus Rift, the HoloLens behaves differently (perhaps due to some rendering optimization?), and this approach had no effect.
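
For reference, here is a minimal sketch of what that alternating approach looks like (the class name and the _ActiveEye uniform name are placeholders of my own, not from the forum post):

using UnityEngine;

// Sketch of the alternating-eye approach. Attach to the Camera.
// Relies on OnPreRender() firing once per eye per frame, which
// holds on the Rift but apparently not on the HoloLens.
public class AlternatingEyeFlag : MonoBehaviour
{
    private int activeEye = 0; // 0 = left, 1 = right

    void OnPreRender()
    {
        // Tell shaders which eye is about to be rendered.
        Shader.SetGlobalFloat("_ActiveEye", activeEye);

        // Toggle so the next call handles the other eye.
        activeEye = 1 - activeEye;
    }
}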

Thanks to huntertank on the HoloLens forum for mentioning an approach that seems to work: in Unity, create two separate Camera objects, and make sure they are placed on separate layers. In the Camera settings in the Inspector, set “Target Eye” to Left for one camera and Right for the other. Then attach a script with an OnPreRender() function to each Camera, and in that function set the uniform value in your shader to left or right.
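
In code, the per-camera script can be as simple as the following sketch (again, the class name and the _ActiveEye uniform are placeholder names of my own):

using UnityEngine;

// Attach one instance to each of the two Cameras, setting eyeIndex
// to 0 on the left-eye Camera and 1 on the right-eye Camera.
public class PerEyeUniform : MonoBehaviour
{
    public int eyeIndex; // 0 = left, 1 = right (set in the Inspector)

    void OnPreRender()
    {
        // Update the global uniform just before this Camera renders,
        // so shaders can branch on which eye they are drawing for.
        Shader.SetGlobalFloat("_ActiveEye", eyeIndex);
    }
}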

The result of this is that I can, for example, render scene geometry in red in one eye and render it in blue in the other eye.


HoloLens Unity: Disabling forced “tracking loss” popup in Unity Personal Edition

I recently started using the Microsoft HoloLens for some graphics research work. As part of this, I needed to capture imagery of the display by placing a camera behind the display and recording while the HoloLens was in a dark environment (the typical mixed-reality capture was unsuitable for my needs).

However, because the HoloLens uses the RGB cameras as part of its spatial mapping features, placing the HoloLens into a dark environment causes its spatial tracking to fail. This would not be an issue for my immediate research needs, except that when spatial tracking fails in a Unity HoloLens application, the application pauses and displays a “Trying to map your surroundings…” splash screen until spatial mapping is possible again (i.e. until the lights are turned back on).

Normally, to suppress this splash screen, one would go into Edit -> Project Settings -> Player, and uncheck the “On Tracking Loss Pause and Show Image” checkbox (as described here).

However, because I am using a Unity Personal license, this checkbox was disabled. This is a consequence of the fact that all splash-screen-related settings are disabled in Unity Personal, and this developer build of Unity had placed this rather important setting into that same category.

Thankfully, I was able to find a workaround that allowed me to suppress that splash screen despite tracking loss.

Under [project directory]\ProjectSettings\ProjectSettings.asset, find the line that says:

m_HolographicPauseOnTrackingLoss: 1

And change it to:

m_HolographicPauseOnTrackingLoss: 0

I hope that in future versions of the Unity HoloLens build, this setting is made editable in the editor even for Unity Personal users. The spirit and rationale of the splash-screen limitation shouldn’t extend to a feature that is a vital piece of control in a HoloLens application.

“A Hand-Held, Self-Contained Simulated Transparent Display” to be presented at ISMAR 2016 (poster)


Next week (September 19-23, 2016), I will be at the 15th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2016) in Merida, Mexico. At ISMAR, I will present a poster paper titled “A Hand-Held, Self-Contained Simulated Transparent Display”:

Hand-held transparent displays are important infrastructure for augmented reality applications. Truly transparent displays are not yet feasible in hand-held form, and a promising alternative is to simulate transparency by displaying the image the user would see if the display were not there. Previous simulated transparent displays have important limitations, such as being tethered to auxiliary workstations, requiring the user to wear obtrusive head-tracking devices, or lacking the depth acquisition support that is needed for an accurate transparency effect for close-range scenes.

We describe a general simulated transparent display and three prototype implementations (P1, P2, and P3), which take advantage of emerging mobile devices and accessories. P1 uses an off-the-shelf smartphone with built-in head-tracking support; P1 is compact and suitable for outdoor scenes, providing an accurate transparency effect for scene distances greater than 6m. P2 uses a tablet with a built-in depth camera; P2 is compact and suitable for short-distance indoor scenes, but the user has to hold the display in a fixed position. P3 uses a conventional tablet enhanced with on-board depth acquisition and head tracking accessories; P3 compensates for user head motion and provides accurate transparency even for close-range scenes. The prototypes are hand-held and self-contained, without the need for auxiliary workstations for computation.

NSF GRFP Essays Uploaded

In 2015, I was awarded a fellowship through the NSF GRFP (National Science Foundation Graduate Research Fellowship Program). I recommend that any eligible grad student apply, even if they are unsure of their chances. The process of writing essays about yourself and your research plans is a great opportunity to organize your thoughts at the beginning of grad school.

My essays (Personal Statement and Research Proposal) are available on my website, and also at Alex Lang’s wonderful page of tips about how to do well when applying for fellowships:

NSF GRFP – Personal Statement – Dan Andersen

NSF GRFP – Research Proposal – Dan Andersen

Building ffmpeg for Android using NDK in VirtualBox Ubuntu installation

This is a quick note about an issue I encountered when doing some Android development work that involved building ffmpeg from source.

My main dev machine uses Windows, and so to build the ffmpeg libraries for use in a video streaming application, I installed a VirtualBox machine with Ubuntu on it.

I was following this guide on how to set up ffmpeg using the Android NDK. However, on the “./configure” step, I kept getting an error saying: “No working C compiler found.”

Investigation of the config.log file generated by the script revealed the following error:

“Fatal error: invalid -march= option: `armv5te'”

Some Googling suggests that this is a result of missing symlinks for the assembler program, and that setting up the symlinks would tell the compiler which assembler to use.

However, when attempting to create the symlink, I got an error that it was not possible due to being a “Read-only file system.”

It turns out that when running VirtualBox with Windows as the host OS, symlinks aren’t possible in shared folders for security reasons. See this link.
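
As a hypothetical illustration (the paths here are made up), any symlink attempt inside the shared folder fails like this:

$ ln -s /path/to/ndk/toolchain/bin/arm-linux-androideabi-as as
ln: failed to create symbolic link 'as': Read-only file system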

So the simple solution, found after a few hours of figuring out what was going on, was just to set up the NDK in a directory that was not a VirtualBox shared folder. After that, compilation appeared to work just fine.

Slides for 2015 Eskenazi Trauma Symposium Presentation — “STAR: Using Augmented Reality Transparent Displays for Surgical Telementoring”

Andersen, Daniel. “STAR: Using Augmented Reality Transparent Displays for Surgical Telementoring.” Eskenazi Health 22nd Annual Trauma & Surgical Critical Care Symposium. Indianapolis, IN. 16 Oct 2015. Conference Presentation.

Purdue CS department article about my NSF fellowship


Earlier this year, I received a fellowship from the National Science Foundation’s Graduate Research Fellowship Program (NSF GRFP). It’s a five-year fellowship, providing three years of financial support. This week, Purdue University’s computer science department did a short article about my fellowship:

CS Grad Student Selected for NSF Fellowship