My Family’s Experiences with the Vive

A few days ago, I had the opportunity to have my parents, my sister, and my brother-in-law visit me for my graduation. During the few days they were visiting, I demoed my Vive to them and they absolutely loved it.

It was a relatively small set of titles that I had them try, but they were good choices, as they were all very “demo-able” for a small group: The Lab, Universe Sandbox, Tiltbrush, Destinations, and Google Earth VR.

I had my monitor set up so they could see what the Vive user was seeing, and I set up a cheap speaker to mirror the audio. Because we wanted to be able to hear each other while in VR, we tended to skip the earbuds entirely (the demos we ran didn’t rely much on audio).

The SteamVR tutorial is a great way to get everyone accustomed to the controls, but it’s a little repetitive for each person in a group to run through individually. What I ended up doing, which I think worked out well, was to run through the SteamVR tutorial myself while the others watched the screen and my actions. We still had to explain a few of the controls along the way as new people tried on the Vive, but it generally worked out well.

Here is a collection of various reactions and insights from watching my family try out these demos:

The Lab:

Definitely the best demo to start people on. Everything is just polished and well-designed in terms of user interaction. Some of my family had a little trouble with the teleport mechanic for a few minutes, but nothing too serious.

The Aperture-Science-style humor definitely fits my family. They were cackling with glee as they mowed down waves of cartoon enemies in Longbow.

It surprised me just how popular the robot dog was with my sister. I think she spent a good 10-15 minutes playing fetch and petting it! And that one giant eyeball is the cutest thing when the dog is enjoying being petted.

Universe Sandbox:

My dad was the main one who tried this. He seemed to get a real sense of awe looking at scenarios with lots of moving objects (like Saturn and its many moons).

This one still has some UI clunkiness and was harder to use, especially in how the tools are selected or manipulated. The grab/scale movement controls with the grip seemed to work well, though.

Tiltbrush:

This was extremely popular with my family, especially as a conversation topic for pondering the potential of VR. My dad, who is an artist/illustrator (you can see his work at http://jonandersenart.com/work/), was fascinated by the potential of drawing in 3D, especially for architects. He loved how he could draw a building around himself and then rescale it. As an artist, he spent a lot of time testing out each of the brushes (with lots of undo/redo) to see what he could accomplish in Tiltbrush. It was good that there was the straightedge tool, but we would all like to see more constraint-based tools, like in a CAD program.

We also did some neat collaborative drawing. For example, my dad would draw some foundations/outlines of a building, then hand it to me to draw some more details on the building. Then I would hand it off to my sister (who is a horticulturist who does landscape/garden work for museums) to create some well-designed gardens surrounding the virtual building.

One thing I thought was funny was how my sister, as soon as she saw she had a “fire” brush, drew a little fire with wooden logs, then sat next to it and “enjoyed the heat.” A direct mirror of that one bit from the Vive trailer.

I noticed that this was one app where my dad chose to walk along the perimeter of the play area after putting on the headset, just to establish the “safe” bounds where he could walk. Perhaps it was a result of the “empty landscape” that Tiltbrush has as a background, or perhaps it was more about getting a feel for his 3D “canvas.”

Sadly, this is the only demo I got a (very short) video recording of.

Destinations:

Not much to say about this one except that they thought it was neat. We went to the English churchyard, Mars, Valve HQ, and several other locations. I think what we enjoyed about this one was talking about the process of photogrammetry and the technical details of how one actually captures such scenes.

Google Earth VR:

This was the one that captured the most attention from my family. What I think is interesting is that we had very little interest in visiting “famous” landmarks or new areas around the world. Instead, we all wanted to find and show each other areas we had been to before, to relive old memories. We showed each other where we live(d) and work(ed), or travel destinations that some of us had visited before. For example, my dad showed us all the cemetery of Staglieno in Genoa, which was a highlight of his travels to Italy when he was younger.

While the resolution you can get in Google Earth VR is somewhat disappointing at human-scale, it works fine for large, monumental areas. In particular, we spent a lot of time going to mountaintops that we ourselves had ascended (like Grandeur Peak in Utah), and then tracing back down along the trail. Because most of the scenery was so far from where we were “standing,” we could still get some wonderful views of the Salt Lake Valley that resembled what we remembered such views looking like.

One feature that Google Earth VR needs is a way to search for locations by typing with the virtual keyboard. That said, part of the fun was in slowly navigating and trying to find a place by visual landmark. We also had some trouble with a slow internet connection, so the app really needs more pre-caching ability.

Conclusion:

Overall, the clearest sign of how much my family was into the Vive was that it became something we did at some point every day they were visiting. I had initially thought it would make for a fun evening on the first day, after which we would just do other things in town. But we made time every day for VR and had a blast.

I don’t think anyone had any issues with motion sickness, and we only had one moment where a controller hit a wall (with not much force, as the user was just attempting to point at something). We initially had someone hold the cable for whoever was using the Vive, but we quickly found that it wasn’t necessary and that everyone could untangle themselves when needed. The Vive is a very solid VR system, and this experience reinforced just how important and valuable room-scale tracking and hand tracking are in VR.

The Dream of a “VR Mall” is a Fantasy

There’s an old and tired mental picture that a lot of people have about VR and commerce: the “virtual mall” — walking from virtual store to virtual store in some sort of Second Life-style 3D environment, going over to items, picking them up with gesture controls, trying them on, and all that.

I think that this is precisely the wrong mental model to be thinking of.

GeoCities was built around the idea of websites inhabiting a place in cyberspace that was “near” other similar sites.

Back when people were still figuring out how the Web worked, there was a lot of focus on “proximity” that mimicked the real world. GeoCities was built on the idea of neighborhoods, each related to a particular subject matter, with each site getting a numbered address. There was a sense of being “next” to other people’s webpages, and you could browse the way you would drive down a street, looking at each building in sequence.

There were e-commerce approaches that would attempt to monetize webspace based on whether your site was positioned “next” to a popular site. This is similar to the idea of physical malls, where a few big stores drive traffic to the whole area and the smaller stores benefit from others browsing.

The reason this existed at the time was that search engines had not yet advanced; hierarchical directories of sites were maintained manually. With modern search engines, we don’t have that limitation. When we go on Amazon to buy something, we aren’t clicking through hyperlinked image-maps of a store in some skeuomorphic e-commerce version of Microsoft Bob. We see text and quickly jump back and forth, in and out, and open a bunch of tabs.

Microsoft Bob.

We teleport. We link to multiple places at once. We have gained something by doing shopping online that we don’t get in brick-and-mortar stores. When price comparing in real life, can you quickly hop between two stores? Do you find value in walking between different areas of the store instead of having everything a few clicks away?

Janus VR represents webpages as virtual spaces, and links as doorways.

I like systems like Janus VR because they acknowledge the modern Web-using person’s ability to jump around at will. However, I don’t see this as an effective replacement for our normal modes of online interaction, and I don’t think that VR commerce is going to look like this. Any effective attempt at using VR for shopping must enhance the experience, not try to mimic reality with all its disadvantages in an attempt at perfect fidelity.

For this reason, I absolutely dismiss the idea of the “VR store” in most cases. There may be some uses, like clothes shopping, where you want a virtual environment to move around in, but getting that sort of body-scanning “fit” to the point where it’s usable, rather than troublesome and glitchy like a lot of augmented reality applications, is still far off.

Now, I do think there is value in VR for shopping in the form of product visualization. I recently saw some neat experimental work using WebGL to show off products for Best Buy, such as a fridge and a washing machine.

Here’s how I predict this will happen. First, over time, some companies will add special features for certain products in their online stores, where you can click and see an in-browser WebGL visualization of the product, like the ones I linked above. This takes development time, so we’re not going to see it at scale, done for every single product, until 3D scanning becomes a lot cheaper, faster, and better — and for many products there’s no additional benefit to a 3D Web visualization (though for some there will be). Think of something like Sketchfab’s viewer, applied to websites that do 3D printing services. These are companies that already have the 3D assets needed to show something — that is the main bottleneck.

An example of a simple WebGL scene (using three.js) being given WebVR support.

Next, as the work of the WebVR group begins to mature, VR integration into the browser (meaning the head-tracking and lens distortion) will be accessible via Javascript APIs, without needing to use special dev versions of the browser as you do now.

As the consumer version of the Rift and other VR devices become more widespread, driven by gaming, there will be improvements to these previously-existing WebGL visualizations to add a “go into VR mode” button. There are already some nice boilerplate projects that make it easy to support WebVR in a WebGL project and to enter and exit VR mode with a button press. This will initially be a promotional thing or a special feature for a small audience, but it will eventually be built into the 3D Web visualization libraries as a default “VR mode support.”
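That “go into VR mode” button boils down to a small amount of wiring: detect a presentable display, show the button, and request presentation of the WebGL canvas on click. Here is a minimal sketch assuming the WebVR 1.1 API shape (`navigator.getVRDisplays`, `VRDisplay.requestPresent`); the `shouldShowVRButton` helper name is my own, not from any particular boilerplate:

```javascript
// Decide whether an "Enter VR" button should be shown, given the
// VRDisplay objects the browser reports (pure helper, easy to test).
function shouldShowVRButton(displays) {
  return displays.some(function (d) {
    return Boolean(d.capabilities && d.capabilities.canPresent);
  });
}

// Browser-side wiring (WebVR 1.1): query displays, reveal the button,
// and begin presenting the WebGL canvas to the headset on click.
function initVRButton(button, canvas) {
  if (typeof navigator === "undefined" || !navigator.getVRDisplays) {
    return; // WebVR not available in this browser
  }
  navigator.getVRDisplays().then(function (displays) {
    if (!shouldShowVRButton(displays)) return;
    var display = displays[0];
    button.style.display = "block";
    button.addEventListener("click", function () {
      // requestPresent must be called from within a user gesture,
      // which is why it lives inside the click handler.
      display.requestPresent([{ source: canvas }]);
    });
  });
}
```

The same split — a capability check plus a gesture-triggered present call — is essentially what the boilerplate projects package up for library authors.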

So the model I’m seeing for VR support for shopping is definitely not a “walking through a virtual mall and staring at kiosks” simulacrum of reality. And it’s also not “looking at Amazon search results in a bunch of floating 3D windows in VR.” I imagine that the bulk of VR-enabled shopping will be navigation using normal Web pages, some of which will have 3D real-time visualizations with “VR mode” enabled by default. In these cases, you’ll be able to press a button, put on your Rift, look at the object in question from different angles, maybe see it in a sample environment, get a sense of scale, and then take off the Rift and order it from the site.

Item-focused VR visualization is the probable approach, not space-focused VR stores. Otherwise, we would have all already transitioned to something like Second Life’s virtual malls by now.

Cupola VR Viewer Released!


tl;dr:

Get the Cupola VR Viewer app here from the Chrome Web Store

GitHub repository, including documentation and the Javascript client library you need to make your WebGL page work with Cupola

I just finished the initial release of the open-source project I’ve been working on for the past month. It’s a Google Chrome packaged app to make it easier and smoother to connect the Oculus Rift with browser-based VR environments on the Internet. Basically, you install the “Cupola VR Viewer” app, connect your Rift, and paste in the URL of a particular Cupola-supported VR webpage. The webpage needs to use the “cupola.js” Javascript library, which is available here.

In the app, I’ve provided links to a couple of sample WebGL pages that support Cupola, which you can load in the Chrome app to get head-tracking working. You can also drag and drop the Oculus config files into the app to use your calibration data (still experimental; it doesn’t persist across exit/restart).

My work here is similar to (and inspired by) vr.js and oculus-bridge, but with a couple of differences and improvements:

– vr.js is an awesome NPAPI plugin for Chrome and Firefox, but unfortunately Chrome is retiring NPAPI support. In fact, Chrome 32 beta was just released, which is getting rid of NPAPI.

– oculus-bridge uses a standalone application that interacts with the Oculus SDK and provides a WebSocket stream of orientation data that a website can connect to. However, WebSockets are relatively slow, adding about a 10-millisecond delay that I find noticeable and disorienting.

In contrast, Cupola VR Viewer uses Chrome’s USB API to get the raw sensor data from the Rift, and I’ve reimplemented parts of the Oculus SDK in Javascript to translate the sensor data into the orientation. I find that this approach provides lower latency than WebSockets, and is unencumbered by the loss of NPAPI plugins in Chrome.
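To give a sense of what “translating the sensor data into the orientation” involves, here is a simplified sketch of gyro integration, the heart of any such sensor-fusion code. This is my own illustration, not the actual Cupola source, and it omits the accelerometer/magnetometer drift correction that the real Oculus SDK performs:

```javascript
// Hamilton product of two quaternions {w, x, y, z}.
function quatMultiply(a, b) {
  return {
    w: a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
    x: a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
    y: a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
    z: a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w
  };
}

// Renormalize to unit length so numerical error doesn't accumulate.
function quatNormalize(q) {
  var n = Math.sqrt(q.w * q.w + q.x * q.x + q.y * q.y + q.z * q.z);
  return { w: q.w / n, x: q.x / n, y: q.y / n, z: q.z / n };
}

// Advance orientation q by the gyro's angular velocity `omega`
// (a vector in rad/s) over a timestep of dt seconds: build the small
// rotation quaternion for this step and compose it onto q.
function integrateGyro(q, omega, dt) {
  var mag = Math.sqrt(omega.x * omega.x + omega.y * omega.y + omega.z * omega.z);
  if (mag < 1e-9) return q; // effectively no rotation this step
  var halfAngle = 0.5 * mag * dt;
  var s = Math.sin(halfAngle) / mag;
  var dq = {
    w: Math.cos(halfAngle),
    x: omega.x * s,
    y: omega.y * s,
    z: omega.z * s
  };
  return quatNormalize(quatMultiply(q, dq));
}
```

Running this per sensor report, entirely in the page's JavaScript, is what lets the orientation stay close to the raw USB data instead of waiting on a WebSocket round-trip.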

If you’re interested in using VR and the Rift with browser-based virtual environments, please check this out! I think WebGL and three.js make it really easy to set up 3D environments, and having a system like this will be really useful to the VR community.

Let me know if there are any questions, comments, feedback, bug reports, pull requests or anything like that. I really want to make something useful for all of you in the Rift community. Thanks!