Generated Spaces VR Prototype: 13 February 2015

Creating a virtual space from a void necessitates asking which rules from the physical world will be inherited and which will be broken. Many gallery spaces try to recede from the artwork, leaving the viewer immersed in an aesthetic experience without distraction. VR technology, however, uncomfortably encompasses the viewer’s entire visual field, which leads to the opposite problem: convincing the viewer that they are standing in a world they wish to explore.

There is no natural way to design for such an environment. Unlike a page or a screen with physical constraints, the virtual space is a tabula rasa. In this instance, we had to invert the paradigm of curating images for a space: instead, we generate the space itself from the available images and their metadata.

We first explored images radiating outward from the viewer and scattered into the distance to give the user complete freedom to wander. Unexpectedly, the experience was intimidating: the enveloping darkness was too oppressive. Some wayfinding structure was needed to prompt the user to explore, akin to an English garden designed with clear but meandering paths that slowly reveal natural beauty.

In our demo, we explored spiraling forms akin to the Guggenheim’s iconic ramp. The structure is defined by a path for the ground and images for the walls. Branching spirals are determined by each image’s medium, read from its metadata. Both the images and their metadata are provided via the Rijksmuseum API.
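
As a rough illustration of this placement logic, the sketch below (in Unity C#, the engine we built the demo in) lays artwork quads along an ascending spiral and offsets each branch by medium. The numeric parameters and the medium-to-branch mapping are illustrative stand-ins, not the demo’s actual values.

```csharp
using UnityEngine;
using System.Collections.Generic;

// Sketch: lay artwork quads along an ascending spiral ramp, with each
// medium branching onto its own arm. All numeric values and the
// medium mapping are illustrative, not the demo's shipped parameters.
public class SpiralGallery : MonoBehaviour
{
    public struct Artwork
    {
        public Texture2D image;  // fetched via the Rijksmuseum API
        public string medium;    // e.g. "painting", taken from metadata
    }

    public List<Artwork> artworks = new List<Artwork>();
    public float anglePerImage = 20f;  // degrees of spiral per artwork
    public float radius = 6f;          // wall distance from the spiral axis
    public float risePerImage = 0.3f;  // vertical climb, Guggenheim-style

    void Start()
    {
        for (int i = 0; i < artworks.Count; i++)
        {
            // Each medium gets its own angular offset, forming a branch.
            float angle = (i * anglePerImage + BranchOffset(artworks[i].medium))
                          * Mathf.Deg2Rad;

            Vector3 pos = new Vector3(
                Mathf.Cos(angle) * radius,
                i * risePerImage,
                Mathf.Sin(angle) * radius);

            GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
            quad.transform.position = pos;
            // Face the spiral axis; flip 180 degrees if your quad's
            // normal points the other way.
            quad.transform.rotation =
                Quaternion.LookRotation(new Vector3(-pos.x, 0f, -pos.z));
            quad.GetComponent<Renderer>().material.mainTexture = artworks[i].image;
        }
    }

    float BranchOffset(string medium)
    {
        // Hypothetical mapping from medium to branch angle.
        switch (medium)
        {
            case "painting": return 0f;
            case "print":    return 120f;
            default:         return 240f;
        }
    }
}
```

The ground ramp can then follow the same parametric curve, swept into a ribbon beneath the walls.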

Looking Forward

Creating a satisfying experience is difficult when the medium is this new and the hardware remains limited. We look forward to what future iterations of this application could become and have highlighted a few critical areas below.

Resolution

During development with an Oculus DK1 headset, we found resolution to be the greatest limiting factor. The art-viewing experience becomes a back-and-forth dance, as the user has to move uncomfortably close to an image to see any detail. We look forward to Oculus’ consumer headset launching this year, as it promises significantly higher screen resolution.

Sound

We experimented with drone and ambient sounds emanating from each artwork to create a sense of aural space. While the sounds gave the user a better idea of where they were in the virtual environment, the effect was too subtle for most viewers. In the next version, we will borrow an idea from gaming and adopt a virtual radio with different selections of music, which might make the experience more accessible to a casual user.
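
For reference, the per-artwork drones rely on Unity’s built-in distance falloff. A minimal sketch, with placeholder clip and distance values (on Unity 5 and later you would also set spatialBlend for full 3D panning):

```csharp
using UnityEngine;

// Sketch: give an artwork a looping, spatialized drone so it can be
// located by ear. The distances are placeholder values.
public class ArtworkDrone : MonoBehaviour
{
    public AudioClip drone;  // the ambient loop assigned to this artwork

    void Start()
    {
        AudioSource source = gameObject.AddComponent<AudioSource>();
        source.clip = drone;
        source.loop = true;
        source.rolloffMode = AudioRolloffMode.Logarithmic; // fade with distance
        source.minDistance = 1f;   // full volume within one unit
        source.maxDistance = 15f;  // effectively inaudible beyond this
        source.Play();
    }
}
```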

Improved Branching

We are also exploring more complex branching patterns to create a less linear experience.

Improved Paths

Currently our paths are minimal and have no bounding structures. Early users found it frustrating to fall off the ramp.
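
One straightforward fix we are considering is lining each ramp segment with invisible colliders. A minimal sketch, assuming the edge points of each segment are already known; the dimensions are placeholders:

```csharp
using UnityEngine;

// Sketch: drop an invisible wall along one edge of a ramp segment so
// users can't step off. Positions and sizes here are illustrative.
public static class PathRailing
{
    public static void AddRail(Vector3 start, Vector3 end, float height = 1.5f)
    {
        GameObject rail = GameObject.CreatePrimitive(PrimitiveType.Cube);
        rail.name = "Rail";

        // Stretch the cube between the two edge points.
        rail.transform.position = (start + end) * 0.5f + Vector3.up * (height * 0.5f);
        rail.transform.rotation = Quaternion.LookRotation(end - start);
        rail.transform.localScale =
            new Vector3(0.1f, height, Vector3.Distance(start, end));

        // Keep the box collider, hide the mesh.
        rail.GetComponent<Renderer>().enabled = false;
    }
}
```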

Typography

We also found typesetting in space to be a disappointing experience, both in concept and in development. Support for the standard “3D Text” element in Unity is very limited, and fonts must be converted in a way that restricts the character set. Further investigation into Unity text objects and potential third-party libraries is needed.
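
For reference, the built-in route looks roughly like the sketch below; TextMesh is Unity’s “3D Text” component, while the wrapper itself is our own illustrative helper, not part of Unity’s API.

```csharp
using UnityEngine;

// Sketch: runtime wall label via Unity's built-in "3D Text" (TextMesh).
// Glyphs missing from the imported font's character set render blank,
// which is the limitation described above.
public static class WallLabel
{
    public static TextMesh Create(string text, Vector3 position, Font font)
    {
        GameObject go = new GameObject("Label");
        go.transform.position = position;

        MeshRenderer renderer = go.AddComponent<MeshRenderer>();
        TextMesh tm = go.AddComponent<TextMesh>();
        tm.text = text;
        tm.font = font;
        tm.fontSize = 48;
        tm.characterSize = 0.05f;            // world-space glyph scale
        tm.anchor = TextAnchor.MiddleCenter;

        renderer.material = font.material;   // TextMesh renders with the font's material
        return tm;
    }
}
```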

Multi-Viewer and Multi-Library

The real power of a generative application (especially in virtual reality) is the freedom of global access and the ability to feed it any data. An experience like a virtual gallery would be dramatically better if other users could participate in the same space and communicate about the work. We hope to add other libraries and APIs, such as ADL/Shared Shelf access, which would give the user a degree of control over the works selected.
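
As a sketch of what plugging in a library entails, fetching a page of works from the Rijksmuseum collection API looks roughly like this in Unity’s coroutine style. The endpoint and query parameters follow the API’s public documentation, but YOUR_API_KEY is a placeholder and ExtractFirstImageUrl is a hypothetical helper standing in for a real JSON parser (Unity 4 ships without one).

```csharp
using UnityEngine;
using System.Collections;

// Sketch: pull a page of works from the Rijksmuseum collection API,
// then download one image. YOUR_API_KEY is a placeholder; parsing the
// response (artObjects[i].webImage.url) is left to a JSON library.
public class RijksLoader : MonoBehaviour
{
    const string Endpoint =
        "https://www.rijksmuseum.nl/api/en/collection?key=YOUR_API_KEY&format=json&ps=20";

    IEnumerator Start()
    {
        WWW query = new WWW(Endpoint);
        yield return query;

        // A real implementation would parse query.text with a JSON
        // library and walk the artObjects array for URLs and media.
        string imageUrl = ExtractFirstImageUrl(query.text);
        if (string.IsNullOrEmpty(imageUrl))
            yield break;

        WWW image = new WWW(imageUrl);
        yield return image;

        Texture2D tex = image.texture;  // ready to hang on a gallery wall
        Debug.Log("Loaded image: " + tex.width + "x" + tex.height);
    }

    string ExtractFirstImageUrl(string json)
    {
        // Hypothetical stub: swap in MiniJSON or a similar parser here.
        return "";
    }
}
```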