VR Looking Inside


Get ready to engage with interactive 3D cells! This collaborative Virtual Reality (VR) experience will increase students' engagement and excitement for STEM learning, bring out their naturally inquisitive natures, and improve their outcomes in these subjects.

Platform

HTC Vive / Pro

Oculus Quest / Rift S

Valve Index

PICO Neo 2

Launch Date

Coming soon...

Areas of focus

Producer

Experience Design

UX Research

UI Design

UE4

Context

This project started as a mixed-reality experience in which students would sit around a table in the same space and use physical manipulatives (cards) to spawn organelles in the augmented world. Because it was a co-located experience in which groups of people would collaborate for relatively long periods, we decided to pivot to a VR experience as soon as the COVID-19 pandemic started.

[Image: concept of the original co-located AR classroom experience, abandoned due to COVID-19]

Goal

VR Looking Inside is a suite of Virtual Reality simulations for STEM learning in K-12 classrooms. All of the simulations are designed to facilitate collaborative, constructivist learning experiences for small student groups: letting students explore interactive 3D models and simulations together gives them the opportunity to make mistakes and arrive at a solution through collaborative dialogue. My role included overseeing the production team and making sure we delivered functional, polished simulations within the defined deadlines. I also assembled the final product and shipped it to the platforms from which schools would download it.

Design Process

Technology

The design process for the VR version of the simulations was a bit different from the usual one. While working on the MR version, we had already defined the users and scenarios, as well as the content that would be part of the simulations; and even though some of it had to be changed, most of it could be reused for the VR version without major modifications.

The biggest challenge was to adapt all the interactions to VR in a way that was user friendly and intuitive, especially because most of what we built for the MR simulations was designed around the constraints of that hardware. This is why the first step was to create a hardware comparison: to know the differences between the two platforms and be able to make decisions for the current hardware (VR), while understanding why certain decisions had been made when designing for the previous hardware (MR).

| MR (MIRA Prism) | VR (Oculus Quest) |
| --- | --- |
| 3 DoF | 6 DoF |
| 1 controller | 2 controllers (potentially hand tracking without controllers) |
| Camera can be accessed | Camera can't be accessed |
| Scene position and visualization depend on the tracking mat | Fully interactive scene of at least 2 m × 1.5 m with full freedom of movement |
| Physical manipulatives (cards) can be used | Physical manipulatives can't be used |
| The main environment is the real world | The environment is completely virtual |

Design Considerations

Because VR had been around for a while when we started the project, most of the constraints and best practices of the medium had already been widely discussed by researchers and designers in the community. Here are some of the main considerations we took into account when designing the simulations:

  1. Camera and world movement: one of the main best practices we kept in mind when designing any feature was that neither the camera nor the environment as a whole should ever be moved, rotated, or scaled programmatically; doing so could induce motion sickness and discomfort in users. Any transformation of the camera should happen naturally, from the user moving the headset.

  2. Viewing zones: when positioning the main elements of the simulation in the scene, we kept them, most of the time, within a horizontal field of view (FOV) of around 70° and a vertical FOV of around 50°, below eye level (see the sketch after this list). The only times we move elements of the simulation higher than that level is to catch users' attention in certain "wow" moments.

  3. Height range and arm length: the simulations would be used by middle school students as well as adults, which means that both height and arm length had to be carefully considered when designing interactions and placing elements in the world.

  4. Text readability: since every simulation depended on text at some point to deliver certain information to the users, it was important to make sure that the text was the right size and at the right distance to be readable (the sketch after this list includes one way to estimate this).

  5. Body position: we tried to keep users' arms in a natural, comfortable position close to the body, at roughly a 90-degree angle, most of the time, to avoid the so-called "gorilla arms" problem, in which users quickly get tired from keeping their arms extended for long periods of time.

  6. Multiplayer experience: probably one of the most challenging constraints was that the simulations could be played either in single-player mode or collaboratively with other people. This affected both the design of the space and the interactions, to accommodate all the different scenarios.
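
To make the viewing-zone and text-readability considerations concrete, here is a minimal UE4 C++ sketch of how such checks could look. It is illustrative only: the function names, the exact interpretation of the comfort zone, and the 1° minimum angular size for text are assumptions, not the project's actual code.

```cpp
#include "CoreMinimal.h"

// Returns true if a world-space point sits inside the comfortable viewing
// zone described above: within ~70° horizontal FOV (±35°) and within a
// vertical band kept below eye level (here assumed to be 0° to -25°).
static bool IsInComfortZone(const FVector& CameraLocation,
                            const FRotator& CameraRotation,
                            const FVector& Point)
{
    // Direction to the point, expressed in the camera's local space.
    const FVector Local = CameraRotation.UnrotateVector(Point - CameraLocation);

    const float YawDeg   = FMath::RadiansToDegrees(FMath::Atan2(Local.Y, Local.X));
    const float PitchDeg = FMath::RadiansToDegrees(
        FMath::Atan2(Local.Z, FVector(Local.X, Local.Y, 0.f).Size()));

    return FMath::Abs(YawDeg) <= 35.f && PitchDeg <= 0.f && PitchDeg >= -25.f;
}

// Minimum readable text height for a given viewing distance, from the
// angular-size relation h = 2 * d * tan(theta / 2). The ~1° angular size
// is a placeholder readability threshold, not a measured value.
static float MinReadableTextHeight(float DistanceCm, float MinAngularSizeDeg = 1.f)
{
    return 2.f * DistanceCm * FMath::Tan(FMath::DegreesToRadians(MinAngularSizeDeg) / 2.f);
}
```

Under these assumptions, for example, text placed 2 m (200 cm) away would need to be roughly 3.5 cm tall to remain readable.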


Something else to keep in mind was that we had a small team, which meant we had to be strategic about which areas to focus on without compromising the app as a whole.

Proposed Solutions

To compensate for some of the challenges mentioned in the previous section, these are some of the solutions we proposed:

  1. Fading in/out transitions: to create smooth transitions between scenes or states, we used fade-in and fade-out effects instead of camera pans (see the sketch after this list). We decided that if a camera pan was ever needed to help teach a certain concept, we would instead play a 2D animation on a world-space UI.

  2. Floating menus: I created a floating menu system that allowed users to spawn the menu from a watch on their wrist and then drag and drop it to whatever position they preferred. This system addressed the previously mentioned challenges of height, arm length, text readability and body position. Since the spawned menus are visible only to oneself, it also benefited the multiplayer experience by keeping the scene clean: only the other players' avatars and your own menu, which could be closed when not actively in use (also covered in the sketch after this list).

  3. Laser pointer: in combination with the floating menus, we addressed the challenges of body position and arm length by using laser pointers to interact with any element in the scene (also sketched below). This lets everyone interact with anything for long periods of time, since it allows for a natural body position while doing so. It also helps keep the multiplayer experience clean: the alternative would be to interact with the elements in the scene directly with one's hands, which would mean a really cluttered interactive area if everyone interacted at the same time.

  4. Interactive text: another strategy we used to help with text readability was to show certain text only when the user performed an action, like hovering over an element or spawning a menu. This helps keep the focus on the simulation itself by not having too much information in the user's FOV at any one time.
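
Here is a minimal UE4 C++ sketch of how the first three solutions could be wired up. It is an illustration under assumptions, not the project's shipped code: the function names and component setup are hypothetical, although StartCameraFade, SetOnlyOwnerSee and LineTraceSingleByChannel are real UE4 APIs.

```cpp
#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"
#include "Camera/PlayerCameraManager.h"
#include "Components/WidgetComponent.h"
#include "MotionControllerComponent.h"

// 1. Fade in/out instead of camera pans: fade to black, swap the scene
//    state, then fade back. The camera itself never translates.
static void FadeOutForTransition(APlayerController* PC)
{
    if (PC && PC->PlayerCameraManager)
    {
        // Fade from fully visible (0) to black (1) over half a second and
        // hold until the new state is ready, then fade back with 1 -> 0.
        PC->PlayerCameraManager->StartCameraFade(
            0.f, 1.f, 0.5f, FLinearColor::Black,
            /*bShouldFadeAudio=*/false, /*bHoldWhenFinished=*/true);
    }
}

// 2. Floating menus: render the wrist-spawned menu only for the local
//    player, so everyone else's scene stays uncluttered.
static void MakeMenuPrivate(UWidgetComponent* FloatingMenu)
{
    if (FloatingMenu)
    {
        FloatingMenu->SetOnlyOwnerSee(true); // visible to the owning player only
    }
}

// 3. Laser pointer: a line trace from the controller lets users select
//    distant elements while keeping their arms in a relaxed position.
static AActor* TraceLaserPointer(UWorld* World, UMotionControllerComponent* Hand)
{
    if (!World || !Hand)
    {
        return nullptr;
    }

    const FVector Start = Hand->GetComponentLocation();
    const FVector End   = Start + Hand->GetForwardVector() * 1000.f; // 10 m reach

    FHitResult Hit;
    if (World->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility))
    {
        return Hit.GetActor(); // the element the laser is pointing at
    }
    return nullptr;
}
```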

Given the small team, and to be strategic about which areas to focus on, I divided the environment into three areas:

  • Interactive area: this area completely changes its structure and arrangement based on the selected simulation, because each simulation has different needs in terms of the layout of elements and interactions. It sits right at the center of the scene, and players spawn around it.

  • Static area: this is the area that sets the narrative tying all the simulations together. It looks like a futuristic lab with an open ceiling, and it is static and identical across simulations (with the potential for some assets specific to each simulation).

  • Sky area: the area outside the lab sets the mood for the current topic. Anticipating more topics in the future, it showcases the location the futuristic lab has traveled to: if the loaded topic is about studying the solar system, the sky displays outer space; if the simulation is about studying cells, it looks like the inside of a cell.


This scene structure allowed us to put the most detail and time into the interactive area, which is where the actual simulations take place and which sits closest to the players. The static area would also be fairly detailed, but with less time spent on it than the first area, since it is shared by all simulations. Finally, the sky area is where we would spend the least amount of time, but it is the one that sets the mood, in terms of lighting and overall feel, for each topic.

Wireframe

I then created a basic wireframe keeping in mind the scene structure mentioned in the previous section.

[Image: wireframe of the scene structure]

Validation

To make sure things would work with the strategy described in the previous sections, I created a 3D scene in 3ds Max containing the different areas that would compose the final scene. After bringing this first model into UE4, trying it in VR, and tweaking the scale of some elements, I was ready to start working on the interactive prototype.

[Image: first 3D model of the futuristic lab in UE4]

Interactive Prototype

The first interactive prototype consisted of the main functionality needed to complete the first simulation, in which the goal was to build a cell. The first iteration of the menu was triggered by looking at the palm of your hand, and it followed the hand as long as the palm was facing up (a simplified sketch of this logic follows):
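
A minimal sketch of that palm-up trigger, assuming the motion controller's up vector approximates the palm normal; the component names and the 0.7 dot-product threshold (roughly a 45° cone) are hypothetical, not the shipped values.

```cpp
#include "CoreMinimal.h"
#include "Components/WidgetComponent.h"
#include "MotionControllerComponent.h"

// Show the hand menu while the palm faces roughly upward, and keep it
// following the hand, as described above.
static void UpdateHandMenu(UMotionControllerComponent* Hand,
                           UWidgetComponent* HandMenu)
{
    if (!Hand || !HandMenu)
    {
        return;
    }

    // 1.0 when the palm faces straight up, 0.0 sideways, -1.0 when down.
    const float PalmUpAmount =
        FVector::DotProduct(Hand->GetUpVector(), FVector::UpVector);
    const bool bPalmUp = PalmUpAmount > 0.7f; // hypothetical threshold

    HandMenu->SetVisibility(bPalmUp);
    if (bPalmUp)
    {
        // Hover the menu 15 cm above the hand while the palm stays up.
        HandMenu->SetWorldLocation(
            Hand->GetComponentLocation() + FVector(0.f, 0.f, 15.f));
    }
}
```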

The next step was to put those interactions into context by placing them in the first mock-up of the futuristic lab, and to iterate on the UI of the hand menu:

More on the prototyping phase coming soon...

User testing

User testing for this project has been happening continuously and has allowed us to iterate quickly based on the feedback received. During the process, we got feedback from subject matter experts, teachers, and other designers and developers from the NYU community who were not directly involved with the project.

The Outcome

Here are some demos of the final simulations and interactions showcasing most of the main elements and features:

Promotional Video

Multiplayer Demo (Spectator View)