
VR Looking Inside


Get ready to engage with interactive, 3D cells! This collaborative Virtual Reality (VR) experience will increase students’ engagement and excitement for STEM learning, bring out their naturally inquisitive natures, and improve their outcomes in these subjects.


Year: 2021-2022

Tools

Illustrator

Photoshop

Figma

Premiere Pro

3ds Max

Unreal Engine

Trello

Areas of focus

Art Direction

Project Management

Product Design

UI/UX

3D Design

Prototyping

User Testing

Development

Video Production

Platform

Verizon VILS (link)

Steam (link)

HTC Vive / Pro

Oculus Quest 1 & 2 / Rift S

Valve Index

PICO Neo 2

Context

This project started as a mixed-reality experience in which students would sit at a table in the same space and use physical manipulatives (cards) to spawn organelles in the augmented world. Because it was a co-located experience in which groups of people would collaborate at close range for relatively long periods of time, among other reasons, we pivoted to a VR experience as soon as the COVID-19 pandemic started.

[Image: the original AR classroom concept, before the COVID-19 pivot to VR]

Goal

VR Looking Inside is a suite of VR simulations for STEM learning in K-12 classrooms. All of the simulations are designed to facilitate collaborative, constructivist learning experiences for small student groups. Letting students explore interactive, 3D models and simulations together gives them the opportunity to make mistakes and arrive at a solution through collaborative dialogue. My role included overseeing the production team and making sure we delivered functional, polished simulations within the defined deadlines. I was also the one putting the final product together and shipping it to the platforms from which schools would download it.

Design Process

Even though we had already gone through most of the design process for the Mixed Reality (MR) version of some of the simulations, we went through the whole process again, reassessing every aspect of the previous design and changing whatever was needed to better adapt to the new technology and context.

Users

First of all, we reassessed the different types of users that would be joining the experience and how they'd be using it:


  1. Students: when in the same physical space, they will be the main users of the simulations and workbooks, collaborating with each other within the experience. They could also join as spectators from a computer or tablet anywhere, and be part of the audience while the teacher gives a lesson virtually as an avatar.

  2. Teachers: when in the same physical space, they will be the ones setting up the environment and facilitating the experience. They could also join the experience to give a lesson virtually as an avatar, interacting with the simulations directly while explaining the concepts to the students who joined as spectators.

Scenarios

The simulations will be used as a complement to the current learning materials for the different lessons. They should help students recall and retain content they have already learned, so three possible scenarios would be:


  1. Using the simulations to give a lesson in any of the topics available.

  2. Helping students better understand the concepts they have learned and study for the lesson exam.

  3. Using the simulations to assess students' knowledge instead of using traditional paper-based exams.

Technology

The biggest challenge was adapting some of the MR mechanics while coming up with new interactions that would take as much advantage of VR as possible, in a way that was user-friendly and intuitive. That's why the first step was to create a hardware comparison: to understand the differences between the two and make informed decisions for the current hardware (VR), while understanding why certain decisions had been made when designing for the previous hardware (MR).

| MR (MIRA Prism) | VR (HTC Vive Pro) |
| --- | --- |
| 3 DoF | 6 DoF |
| 1 controller | 2 controllers |
| Headset camera can be accessed | Headset cameras can't be accessed |
| Scene position and visualization depend on the tracking mat | Fully interactive scene of at least 2 m × 1.5 m with full freedom of movement |
| Physical manipulatives (cards) can be used | Physical manipulatives can't be used |
| The main environment is the real world | The environment is completely virtual |

Design Considerations

Because VR had been around for a while when we started the project, most of the constraints and best practices related to the medium had already been widely discussed by researchers and designers in the community. Here are some of the main considerations we took into account when designing the simulations:


  1. Camera and world movement: one of the main best practices we kept in mind when designing any feature was that neither the camera nor the environment as a whole should ever be moved, rotated, or scaled; doing so can induce motion sickness and discomfort in users. Any transformation of the camera should happen naturally when the user moves the headset.

  2. Viewing zones: when positioning the main elements of the simulation in the scene, we kept them, most of the time, within a horizontal field of view (FOV) of around 70° and a vertical FOV of around 50°, below eye level. The few times we move elements of the simulation higher than that are to catch users' attention in certain "wow" moments.

  3. Height range and arm length: the simulations would be used by middle school students as well as adults, which means both height and arm length had to be seriously considered when designing interactions and placing elements in the world.

  4. Text readability: since every simulation relied on text at some point to deliver certain information, it was important to make sure the text was the right size and at the right distance to be readable (a small worked example appears at the end of this section).

  5. Body position: we tried to keep users' arms in a natural, comfortable position close to the body, bent at roughly 90 degrees, most of the time to avoid the so-called "gorilla arm" problem, in which users quickly get tired from keeping their arms extended for long periods.

  6. Multiplayer experience: probably one of the most challenging constraints was that the simulations could be played either in single-player mode or collaboratively with other people. This affected both the design of the space and the interactions, which had to accommodate all the different scenarios.


Something else to keep in mind was that we had a small team, which meant we had to be strategic about which areas to focus on to achieve the best possible result and have a release-ready application at the end of the process.
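
As a concrete illustration of the viewing-zone and text-readability considerations above, the world-space height an element needs in order to subtend a given visual angle at a given distance follows from basic trigonometry: h = 2d·tan(θ/2). Below is a minimal, engine-agnostic C++ sketch (the project itself was built in Unreal Engine); the 1.5° target angle and 2 m distance are illustrative assumptions, not measured values from our testing.

```cpp
#include <cmath>
#include <cstdio>

// World-space height (m) an object needs in order to subtend a given
// visual angle (degrees) when viewed from a given distance (m):
// h = 2 * d * tan(theta / 2)
double HeightForVisualAngle(double distanceMeters, double angleDegrees)
{
    const double kPi = 3.14159265358979323846;
    const double angleRadians = angleDegrees * kPi / 180.0;
    return 2.0 * distanceMeters * std::tan(angleRadians / 2.0);
}

int main()
{
    // A label 2 m away that should subtend ~1.5 degrees of visual angle
    // (an assumed comfort target, not a measured value) needs to be
    // roughly 5 cm tall.
    std::printf("Required text height: %.3f m\n", HeightForVisualAngle(2.0, 1.5));
    return 0;
}
```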

Proposed Solutions

To compensate for some of the challenges mentioned in the previous section, these are some of the solutions we initially implemented:


  1. Fading in/out transitions: to create smooth transitions between scenes or states, we used fade-in and fade-out effects instead of camera pans. We agreed that if a camera pan was ever needed to help teach certain concepts, we would use a world-space UI with a 2D animation playing on it.

  2. Laser pointer: we addressed the body-position and arm-length challenges by using laser pointers to interact with elements in the scene. This lets most people interact with anything for long periods while keeping a natural body position. It also helps keep the multiplayer experience clean: without the laser pointer, everyone would have their hands inside the simulation while interacting with it, which would be a distraction and could occlude elements for other players. A simplified sketch of the pointer math follows this list.

  3. Interactive text: one of the strategies we used to help with text readability was to only show certain text when the user performs an action, like hovering over an element or spawning a menu. This helps keep the focus on the simulation itself by not crowding the user's FOV with too much information at once.
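
To illustrate the laser-pointer solution, here is a minimal, engine-agnostic C++ sketch of the underlying math: intersecting the controller's forward ray with a flat panel and returning the point where the cursor should be drawn. The actual project relies on Unreal Engine's built-in tracing; the types and names here are hypothetical.

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };

static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }
static double Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Intersect the controller's forward ray with a flat panel, modeled here
// as an infinite plane for brevity. The returned point is where the
// laser's cursor would be drawn, giving users a depth cue.
std::optional<Vec3> LaserHitPoint(Vec3 rayOrigin, Vec3 rayDir,
                                  Vec3 planePoint, Vec3 planeNormal)
{
    const double denom = Dot(rayDir, planeNormal);
    if (std::abs(denom) < 1e-6) return std::nullopt;  // ray parallel to panel
    const double t = Dot(planePoint - rayOrigin, planeNormal) / denom;
    if (t < 0.0) return std::nullopt;                 // panel is behind the controller
    return rayOrigin + rayDir * t;                    // world-space cursor position
}
```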

Prototype 1

The first interactive prototype consisted of the main functionality needed to complete the first simulation, in which the goal was to build a cell. While working on it, we kept in mind all the design considerations and solutions mentioned in the previous sections.

 

The first iteration of the menu was triggered by looking at the palm of your hand, and it followed the hand for as long as the palm was facing up.
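
The palm-up check behind this menu reduces to comparing the palm's normal with world up. Here is a minimal sketch, assuming a unit-length palm normal in a Y-up world; the 35° tolerance is a hypothetical value for illustration, not the one we shipped.

```cpp
#include <cmath>

// Show the hand menu only while the palm faces (roughly) up. With a
// unit-length palm normal in a Y-up world, the normal's Y component is
// the cosine of its angle to world up.
bool IsPalmFacingUp(double palmNormalY, double toleranceDegrees = 35.0)
{
    const double kPi = 3.14159265358979323846;
    return palmNormalY > std::cos(toleranceDegrees * kPi / 180.0);
}
```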

The next step before testing the first prototype was to put those interactions into context by placing them in the first mock-up of the environment, which I created in 3ds Max, and to iterate on the UI of the hand menu.

Prototype 2

After doing some testing sessions with our first prototype, we learned some valuable lessons:


  1. The floating hand menu was not comfortable to use for long periods of time.

  2. The current design of the simulation space wasn't compatible with the envisioned multiplayer mode, where users would spawn all around the center of the space.


To solve the issue of the center of the environment not being visible to all players around it, we decided to remove any element that didn't belong to the actual simulation and to make the simulations as symmetrical as possible, so they'd be equally visible from any perspective. At the same time, to be strategic about our team size and which areas of the scene to focus on, I divided the environment into three areas:
 

  • Interactive area: this area would change its structure and arrangement completely based on the selected simulation, since each simulation would have different needs in terms of element layout and interactions. It sits right at the center of the scene, and players spawn around it.

  • Static area: this would be the area that sets the narrative tying all simulations together. It should look like a futuristic lab with an open ceiling. It would be static and identical across all simulations (with the potential for some assets specific to each simulation).

  • Sky area: the area outside the spaceship would set the mood for the current topic. Thinking about the potential of adding topics beyond cellular biology in the future, it would showcase the location the futuristic lab has traveled to: if the loaded topic is the solar system, the sky displays outer space; if the simulation is about cells, it looks like the inside of a cell.


This scene structure would allow us to put the most detail and time into the interactive area, which is where the actual simulations take place and which sits closest to the players. The static area would also be quite detailed, but with less time spent on it than on the first area, since it is shared by all simulations. Finally, the sky area would be where we spend the least time, but it is the one that sets the mood in terms of lighting and overall feel for each topic.

[Diagram: scene structure]

To fix the menu problem and build a more scalable system that could be reused in other simulations, I started playing with the idea of a fully customizable system composed of a main menu with buttons that open other sub-menus or panels. All of the menus and sub-menus would be draggable (sketched below, after the list), making the system adaptable to any height and arm length. This would also help keep the environment clean by separating the main interactive elements of the scene into two:


  1. The actual simulation content, composed mainly of 3D models, which is always visible and at the center of the scene.

  2. The menu system, which can be spawned/hidden as needed and can be moved around wherever the user prefers.
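
As referenced above, here is a minimal, engine-agnostic sketch of the drag behavior: storing the panel-to-controller offset on grab so the panel follows the hand without snapping to it. All names and the spawn pose are hypothetical.

```cpp
struct Vec3 { double x, y, z; };

static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// A draggable panel: on grab we store the offset between the panel and
// the controller, so the panel follows the hand without jumping to it.
struct DraggablePanel
{
    Vec3 position{0.0, 1.2, 1.0};   // spawn pose: illustrative values only
    Vec3 grabOffset{0.0, 0.0, 0.0};
    bool held = false;

    void OnGrab(Vec3 controllerPos)
    {
        grabOffset = position - controllerPos;
        held = true;
    }

    void OnControllerMove(Vec3 controllerPos)
    {
        if (held) position = controllerPos + grabOffset;  // follow the hand
    }

    void OnRelease() { held = false; }
};
```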

At the same time, we started prototyping and testing new ways of interacting with the simulation content at the center.

MVP

We spent some more time testing and iterating on all features until we were able to complete the Minimum Viable Product (MVP), deploy it to our end users, and start collecting data. This first MVP included a first version of the Build a Cell and Mitosis simulations, as well as the first version of the multiplayer mode.


Some of the most important new features that we added after the previous iterations and user testing were:

  1. We added a cursor at the end of the laser pointer to help users gauge depth in the scene and keep better track of which elements are being hovered at any moment.

  2. Each new user joining a session gets a different color, to help identify who is who when working on a simulation together (see the sketch below).
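
One common way to implement per-player colors, shown here only as an illustrative sketch (not necessarily what we shipped), is to step the hue around the color wheel by the golden angle so that any number of players get well-separated colors:

```cpp
#include <cmath>

// Give each joining player a visually distinct hue by stepping around
// the color wheel with the golden angle (~137.5 degrees).
double HueForPlayer(int playerIndex)
{
    return std::fmod(playerIndex * 137.50776405, 360.0);  // hue in degrees
}
```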

Because mitosis is a process, we adapted some of the functionality and its menu to better fit the learning content. Some of the new features and changes were:

  1. Added a percentage bar to the menu that shows progress through the current phase of the simulation.

  2. Designed an interactive player to control the most complex parts of the phases, like intricate shape changes or moments when certain elements dissolve.

  3. Designed an interactive sphere for driving more linear changes and processes.

  4. Designed a mechanic to duplicate elements by clicking and dragging an object in the scene (a minimal sketch follows below).
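
Here is a minimal sketch of the duplicate-by-drag idea, with a hypothetical Organelle type standing in for the engine's scene objects: grabbing a template clones it, and the clone is what then follows the controller.

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical scene object; the real project uses engine types.
struct Organelle
{
    std::string type;
    double x = 0.0, y = 0.0, z = 0.0;  // world position
};

// Duplicate-by-drag: grabbing a template leaves the original in place
// and hands the user a fresh copy to drop into the cell.
struct OrganelleDuplicator
{
    std::vector<std::unique_ptr<Organelle>> scene;

    Organelle* BeginDrag(const Organelle& templateObject)
    {
        scene.push_back(std::make_unique<Organelle>(templateObject));  // clone
        return scene.back().get();  // caller attaches this to the controller
    }
};
```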

UI

Here's a snapshot of some of the final components that I designed for the UI:

User testing

User testing for this project happened continuously, which allowed us to iterate quickly on the feedback we received. During the process, we got feedback from subject-matter experts, teachers, and other designers and developers from the NYU community who were not directly involved with the project. Thanks to our collaboration with Verizon, we were also able to test all simulations and features with the main target audience on multiple occasions throughout the design process.

The Outcome

Here are the final simulations showcasing all the main interactions and features:

Features Video

Multiplayer Demo
