
MR Looking Inside


Get ready to engage with 3D, interactive cells! This collaborative Mixed Reality (MR) experience will increase students’ engagement and excitement for STEM learning, bring out their naturally inquisitive natures, and improve their outcomes in these subjects.


Year: 2019-2020

Tools

Illustrator

Photoshop

Figma

Premiere Pro

3ds Max

Unity

Areas of focus

Art Direction

Product Design

UI/UX

3D Design

Prototyping

User Testing

Development

Video Production

Platform

iPhone

(MIRA Prism)

Context

The Verizon Foundation, in partnership with NYC Media Lab, announced the ten winners of the Verizon 5G EdTech Challenge. Projects ranging from AR and VR experiences to machine learning, AI, and mixed reality were chosen for their ability to address student engagement, teacher preparedness, and special needs support. We were one of the ten winning organizations: we received funding, access to Verizon’s 5G Labs, and support from Verizon engineers and mentors to enhance our solution.

[Verizon Innovative Learning logo]

Goal

Looking Inside is a 5G-enabled mixed-reality learning experience for small groups of students. The goal is to enhance science learning, especially for students in under-served communities, by giving teachers exciting and motivating inquiry-based learning experiences that apply validated design principles of effective learning. As the first topic, we chose middle school cell biology. I was in charge of designing and developing all interactions and elements within the simulations, based on the learning design documents.

[Image: AR classroom]

Design Process

Users

First of all, we defined the different types of users who would be using the experience:


  1. Students: they will be the main users of the simulations and workbooks, collaborating with each other within the experience.

  2. Teachers: they will be the ones setting up the environment and guiding the experience.

Scenarios

The simulations will be used as a complement to the current learning materials for the different lessons. They should help students recall and retain the content they have already learned, so two possible scenarios would be:


  1. Using the simulations to assess their knowledge instead of traditional paper-based exams.

  2. Helping students to better understand the concepts learned and study for the lesson exam.

Technology

The main technology chosen for this project was the MIRA Labs AR headset, mainly because:


  1. It supports eye contact between students (the visor is see-through)

  2. It's affordable


We also had a camera connected to a computer that handled the recognition of physical manipulatives (cards), which is what made this a Mixed Reality experience: users look at an augmented world while picking up and moving physical objects that ground them in their shared physical space.
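To give a sense of how the card recognition ties into the scene, here is a minimal Unity C# sketch of the spawning side. It assumes a hypothetical card-recognition component that calls OnCardDetected with a card ID whenever a card is seen; the actual recognition pipeline (external camera or phone camera) is not shown, and all names here are illustrative rather than the project's real code.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical glue code: maps a recognized card ID to an organelle prefab
// and spawns it at one of the fixed spawn slots around the play area.
public class OrganelleSpawner : MonoBehaviour
{
    [SerializeField] private List<string> cardIds;              // e.g. "nucleus", "mitochondrion"
    [SerializeField] private List<GameObject> organellePrefabs; // same order as cardIds
    [SerializeField] private Transform[] spawnSlots;            // fixed positions around the mat

    private readonly Dictionary<string, GameObject> prefabById = new Dictionary<string, GameObject>();
    private int nextSlot;

    private void Awake()
    {
        for (int i = 0; i < cardIds.Count; i++)
            prefabById[cardIds[i]] = organellePrefabs[i];
    }

    // Called by the (hypothetical) card-recognition component when a card is detected.
    public void OnCardDetected(string cardId)
    {
        if (!prefabById.TryGetValue(cardId, out GameObject prefab)) return;

        Transform slot = spawnSlots[nextSlot % spawnSlots.Length];
        nextSlot++;
        Instantiate(prefab, slot.position, slot.rotation);
    }
}
```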


Constraints

The design process for this application was quite different from other multimedia projects I've worked on. The technology was brand new and still being developed while we were working on the project, so we had to design the experience around the hardware and its changing capabilities. Some of the constraints to keep in mind while designing the simulations were:


  1. Only 1 controller: the MIRA Prism only has one controller at the moment.

  2. 3 DoF: there is no positional tracking of the controller or the headset (the phone); only their rotation is tracked.

  3. Unstable tracking: there is a tracking solution based on image recognition, but it's quite unstable and depends heavily on the room lighting. It also requires the tracking mat to be visible to the iPhone camera on the headset at all times, which reduces the space where content and interactions can happen.

  4. Device specific: the headset only supports certain iPhone models.

  5. High contrast required: because the technology reflects the light of the phone screen off a transparent plastic visor, elements need high contrast and the background of the 3D scene has to be completely black (see the camera setup sketch after this list). This also means that if the room where the application is used is too brightly lit, the image loses sharpness.

  6. External camera: when I joined the project, we were using the camera of an external PC to recognize the physical cards that spawn organelles, which made for a fairly complex setup just to run the experience.
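As a small illustration of constraint 5, this is roughly what the camera setup looks like in Unity: the scene clears to pure black, since on the additive display only emitted light is visible and black reads as transparent. Field names are illustrative, not taken from the actual project.

```csharp
using UnityEngine;

// Minimal sketch: clear the headset camera to solid black so only bright,
// high-contrast elements show up on the reflective visor.
public class HeadsetCameraSetup : MonoBehaviour
{
    [SerializeField] private Camera sceneCamera;

    private void Awake()
    {
        sceneCamera.clearFlags = CameraClearFlags.SolidColor;
        sceneCamera.backgroundColor = Color.black; // black renders as "transparent" on the visor
    }
}
```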

Proposed Solutions

In order to compensate for some of the constraints mentioned in the previous section, these are some of the proposed solutions:


  1. 2D interactions: to allow organelles to be moved freely around the scene with only the rotation of the controller, I designed a solution where everything that involves moving elements with the controller, and the interactions with them, happens on a single 2D plane (sketched after this list).

  2. Tracking optional: to work around the tracking issues, I found that tracking could be disabled, so one possibility was to add the option of playing without tracking to an initial menu. At the same time, the whole scene had to be designed around the field of view that the tracking-enabled version allowed for.

  3. Keep it light: since the supported phones are not the latest models, we wanted to keep everything as simple as possible, with just the necessary components and complexity.

  4. Use the phone camera: after some tests, I managed to have the phone camera itself detect the cards that trigger the spawning of organelles, taking over from the computer camera used until then. This was huge, because the setup then only required the headset, saving the teachers using it a lot of setup time.
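Here is a minimal sketch, in Unity C#, of the 2D-interaction idea from point 1: the controller's rotation drives a ray from a fixed origin, and the ray's intersection with the horizontal interaction plane becomes the cursor position used to drag organelles around. Class and field names are illustrative rather than the project's actual code.

```csharp
using UnityEngine;

// With only 3 DoF (rotation) from the controller, cast a ray from a fixed
// origin along the controller's forward direction and intersect it with the
// interaction plane, so everything moves on that single 2D plane.
public class PlanePointer : MonoBehaviour
{
    [SerializeField] private Transform rayOrigin;    // fixed point near the user's head
    [SerializeField] private Transform controller;   // rotation-only tracked controller
    [SerializeField] private float planeHeight = 0f; // height of the interaction plane

    public bool TryGetPointOnPlane(out Vector3 point)
    {
        var plane = new Plane(Vector3.up, new Vector3(0f, planeHeight, 0f));
        var ray = new Ray(rayOrigin.position, controller.forward);

        if (plane.Raycast(ray, out float distance))
        {
            point = ray.GetPoint(distance); // cursor position on the plane
            return true;
        }

        point = Vector3.zero; // controller is pointing away from the plane
        return false;
    }
}
```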

Wireframe

I then created a very basic wireframe, keeping in mind both the optional-tracking and 2D-interaction solutions. All interactions had to happen around a centered position that would match the tracking mat's position in the real world, and everything had to sit on the same plane so the spawned organelles could be moved around intuitively and interact with the main elements in the scene (a layout sketch follows the wireframe below).

The 3 main elements that would compose the scene were:


  1. Space where cell is built (Tracking mat)

  2. Positions where cell parts (organelles) are spawned

  3. Positions where unnecessary organelles can be deleted

[Wireframe: layout of the main elements in the scene]
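The layout above can be expressed as a small, illustrative Unity C# script: a central build area at the (virtual) mat position, spawn slots arranged in an arc in front of it, and a delete zone off to one side, all on the same plane. The specific angles and distances are placeholders, not the project's real values.

```csharp
using UnityEngine;

// Illustrative layout helper matching the wireframe: everything is placed
// relative to the mat center and kept on a single interaction plane.
public class SceneLayout : MonoBehaviour
{
    [SerializeField] private Transform matCenter;        // matches the physical mat when tracking is on
    [SerializeField] private Transform[] spawnSlots;     // where new organelles appear
    [SerializeField] private Transform deleteZone;       // drop area for unneeded organelles
    [SerializeField] private float spawnRadius = 0.35f;  // placeholder distance, in metres

    private void Start()
    {
        // Spread the spawn slots over an arc facing the user.
        for (int i = 0; i < spawnSlots.Length; i++)
        {
            float t = spawnSlots.Length > 1 ? (float)i / (spawnSlots.Length - 1) : 0.5f;
            float angle = Mathf.Lerp(-60f, 60f, t) * Mathf.Deg2Rad;
            Vector3 offset = new Vector3(Mathf.Sin(angle), 0f, -Mathf.Cos(angle)) * spawnRadius;
            spawnSlots[i].position = matCenter.position + offset;
        }

        // Delete zone to one side of the mat, on the same plane.
        deleteZone.position = matCenter.position + new Vector3(spawnRadius * 1.5f, 0f, 0f);
    }
}
```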

Validation

To make sure that things would work as intended before creating the interactive prototype, I created a 3D scene in 3ds Max with all the elements that would compose the simulation. That made it easy to position and scale the elements and confirmed that this layout should work with the technology we were using.

[Image: 3D mockup of the scene in 3ds Max]

Interactive Prototype

The first interactive prototype included all the core functionality that would go into the simulation, but it was tested directly in the Unity editor because of the time it would take to build for the iPhone. It was faster to iterate in Unity and, once things were working and looking good, start testing on the actual headset. Here's a video of the first tests in Unity:

The next step was to test all the main interactions using the headset and controller. After doing so, some issues arose:


  1. The organelles would easily get lost in the infinite scene with the current controller interaction.

  2. The Done button was not reachable once the cell had been built.

  3. It was hard to tell whether you were hovering over an organelle, or which one it was.

  4. The current build was dependent on the server being up and running without issues.


The solutions implemented were the following:


  1. Smoothed out the controller interactions with organelles and set boundaries so organelles can't leave the main play area (see the sketch after the screenshots below)

  2. Moved the Done button to the top of the interaction area

  3. Created a tag with the organelle name that appears when hovering over an organelle

  4. Created a single-player mode that doesn't depend on the server to run. To make selecting those options easy, I also created a user interface that appears at the beginning of the experience on the phone:

[Screenshots: first version of the start UI on the phone]
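As a rough sketch of fixes 1 and 3 above, each organelle can clamp its own position to the play-area bounds and toggle a small name tag on hover. This is illustrative Unity C#, not the project's actual implementation; the hover callback is assumed to be driven by the pointer raycast described earlier.

```csharp
using UnityEngine;

// Keeps an organelle inside the play area and shows a name tag on hover.
public class OrganelleInteraction : MonoBehaviour
{
    [SerializeField] private Transform playAreaCenter;
    [SerializeField] private Vector2 playAreaHalfExtents = new Vector2(0.5f, 0.5f); // x/z half-size, metres
    [SerializeField] private GameObject nameTag; // world-space label with the organelle name

    private void LateUpdate()
    {
        // Fix #1: clamp the position so the organelle can never drift out of reach.
        Vector3 local = transform.position - playAreaCenter.position;
        local.x = Mathf.Clamp(local.x, -playAreaHalfExtents.x, playAreaHalfExtents.x);
        local.z = Mathf.Clamp(local.z, -playAreaHalfExtents.y, playAreaHalfExtents.y);
        transform.position = playAreaCenter.position + local;
    }

    // Fix #3: called by the pointer when hover starts or ends.
    public void SetHovered(bool hovered) => nameTag.SetActive(hovered);
}
```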

The next big iteration of the project was to move the previously shown UI into the 3D environment and improve its visuals. This allowed users to wear the headset from the beginning of the experience, so they could start getting familiar with the headset and controller while navigating the start menu. The previous iteration was quite confusing because users had to interact with a UI on the iPhone mounted directly in the headset; from the outside, the hardware looks like a headset with a controller, so interacting with the device itself didn't make sense to them. The first thing everyone would do was put on the headset, which meant we had to start the application from their forehead.

[Screenshot: the start menu rendered inside the 3D scene]
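A minimal sketch of how a start menu like the one above can live in the 3D scene: the canvas is switched to world space and parked in front of the headset camera, so the menu can be navigated with the controller while wearing the headset. Names and values are illustrative, not the project's actual code.

```csharp
using UnityEngine;

// Places the start menu canvas in world space, facing the user.
public class WorldSpaceStartMenu : MonoBehaviour
{
    [SerializeField] private Canvas startMenu;
    [SerializeField] private Camera headsetCamera;
    [SerializeField] private float distance = 1.2f; // metres in front of the user (placeholder)

    private void Start()
    {
        startMenu.renderMode = RenderMode.WorldSpace;

        Transform t = startMenu.transform;
        t.position = headsetCamera.transform.position + headsetCamera.transform.forward * distance;
        // Face away from the camera so the UI reads correctly (not mirrored).
        t.rotation = Quaternion.LookRotation(t.position - headsetCamera.transform.position);
        t.localScale = Vector3.one * 0.001f; // shrink the pixel-sized canvas to world units
    }
}
```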

We repeatedly ran into the issue of the phone not being connected to Wi-Fi and then having trouble playing online, without knowing at the time what the issue was. So I decided to add an indicator that shows whether the phone is connected to the internet and can play online (sketched below the screenshots).

[Screenshots: final start UI with the connection indicator]
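For reference, a connectivity indicator along these lines can be sketched in Unity with Application.internetReachability; note that it only reports whether a network path is available, not that the game server itself is reachable. The component and field names here are illustrative.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Polls the phone's network reachability and tints a small icon on the
// start menu, so users can see before starting an online session whether
// the phone has a connection at all.
public class ConnectionIndicator : MonoBehaviour
{
    [SerializeField] private Image indicator;       // small icon on the start menu
    [SerializeField] private float pollInterval = 2f;

    private void Start() => InvokeRepeating(nameof(Refresh), 0f, pollInterval);

    private void Refresh()
    {
        bool online = Application.internetReachability != NetworkReachability.NotReachable;
        indicator.color = online ? Color.green : Color.red;
    }
}
```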

User testing

User testing for this project happened continuously and allowed us to iterate quickly on the feedback received. Throughout the process we showcased the project at several events organized by Verizon, so we got feedback from subject matter experts, teachers, students, and even other designers and developers along the way.

The Outcome

Here's a demo of the final application and interactions with all the elements and features working:
