Retention and Transfer of Procedural Tasks in Virtual Reality

Role:

Learning Experience Designer

Duration:

4 weeks

Research Approach:

Mixed-Methods

Techniques Used:

Interviewing, Contextual Enquiry

Data Collected:

[TO BE DECIDED]

Tools Used:

Articulate 360, Figma, Unity for VR, Meta Quest 3s, Adobe Premiere Pro

Team:

Just Me

Project Type:

Master's Thesis

Key Metric:

[TO BE DECIDED]

Too Long? Here's What I Did:

I'm currently developing an academic project that integrates Virtual Reality with Articulate Storyline to validate a blended learning approach. The goal is to improve the learnability and knowledge retention of Tesla vehicle repair procedures. The target audience is body shop repair technicians.

[IN-PROGRESS - Click the project links below]

  1. Micro-learning course in Storyline and Rise —> LINK

  2. The above link contains an HTML embed of the VR exploration walkthrough that I built in Spatial VR

  3. The learner also performs tasks in a game I'm building in Unity —> GitHub LINK

What I’ve Been Working On

As noted above, this academic project integrates Virtual Reality with Articulate Storyline to validate a blended learning approach. The goal is to improve the learnability and knowledge retention of Tesla vehicle repair procedures for body shop repair technicians.

The solution design is IN-PROGRESS. I'm documenting my process as I continue to work on it. STAY TUNED FOR MORE DETAILS.

Background

Body shop technicians repairing a Tesla vehicle after a collision perform complex tasks involving the vehicle’s mechanical, structural, and electrical components. Tesla provides repair procedures that are publicly available on its website. Technicians refer to these procedures to guide the repair process.

Challenge and Focus Area

Collision centers not affiliated with Tesla often rely on online courses or reference Tesla's service procedures on the website. From my discovery phase (which will be documented), I found that technicians often struggle to commit procedural information to motor memory. Strengthening knowledge retention and skill transfer to real-world repair tasks is critical, as it directly impacts technician productivity and shop performance.

My focus area is the rear glass removal process of a Tesla vehicle. I am using publicly available repair procedure information, and the proposed solution will be tested with six technicians who are not affiliated with Tesla.

Testing the Solution [TO BE DECIDED]

There are two measurement criteria: (a) the effectiveness of the instructional content and (b) the usability of the system. System usability and usefulness will be measured with the System Usability Scale (SUS) and self-reported satisfaction scores on a 5-point Likert scale. Training effectiveness will be measured by the learner's scores on knowledge check assessments. The independent variables are (a) the e-learning medium and (b) the VR environment.
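Since the SUS comes up in both measurement criteria, here is a minimal sketch of how I expect to compute it from the 10 survey items, assuming the standard SUS scoring rule (odd-numbered items contribute response minus 1, even-numbered items contribute 5 minus response, and the sum is multiplied by 2.5 for a 0 to 100 score). The class and method names are illustrative, not part of the project code.

```csharp
// Minimal sketch, assuming the standard SUS scoring rule; not project code.
public static class SusScorer
{
    // responses: the 10 item responses in order, each on a 1-5 scale.
    public static float Score(int[] responses)
    {
        if (responses == null || responses.Length != 10)
            throw new System.ArgumentException("SUS expects exactly 10 item responses.");

        int sum = 0;
        for (int i = 0; i < 10; i++)
        {
            // Odd-numbered items (index 0, 2, ...) are positively worded: response - 1.
            // Even-numbered items (index 1, 3, ...) are negatively worded: 5 - response.
            sum += (i % 2 == 0) ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5f; // 0-100 scale
    }
}
```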

Dependent Variables

Self-Paced Training Effectiveness (Articulate Storyline/Rise module)

| Metric | Data Type | Method | Frequency |
| --- | --- | --- | --- |
| Knowledge Retention | Score (%) | Knowledge check (e-learning module) | Pretest & posttest |
| Completion Time | Minutes | Timer | During the task |
| SUS Score | Continuous | System Usability Scale (SUS) survey | After the task |

VR Task Performance

| Metric | Data Type | Method | Frequency |
| --- | --- | --- | --- |
| Task Load Index | Continuous | NASA TLX | After the task |
| Satisfaction Score | Continuous | 5-point Likert scale | After the task |

Solution Design

Note: I will be updating the discovery and analysis phase soon!

Task Breakdown
User Story Breakdown for the E-Learning Module

The user stories are decomposed in the following fashion (an illustrative example follows the list):

  • User Story: E-Learning requirements from a user perspective.

  • Core content and metadata: This refers to the textual, video, and image elements provided in the course.

  • CTAs and Interactive elements: buttons, knowledge checks, and other elements the learner clicks or manipulates in the course.
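For illustration only, a hypothetical decomposed story for this module (not final course content) might read:

  • User Story: As a body shop technician, I want to review the rear glass removal steps before attempting the task, so that I can recall the correct sequence in the shop.

  • Core content and metadata: step-by-step text, a short demonstration video, and annotated images of the rear glass area, tagged by procedure step.

  • CTAs and Interactive elements: a knowledge check button and a drag-and-drop activity for sequencing the removal steps.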

User Story Breakdown for the Virtual Reality Walkthrough

The user stories are decomposed as follows (a minimal Unity sketch follows the list):

  • Interactive Objects: Objects that are interactable in the environment.

  • Controller Input: Input buttons of the controller that are mapped to interactions.

  • Canvas and Directional cues: Instructional text in the form of a dialog box and visual cues.

  • Interactors: scripts responsible for facilitating interactions. These include grab interactors, socket interactors (placing an item in its intended position), and direct interactors.

  • Audio Feedback: Feedback for completing tasks and triggering UI changes via CTA buttons.
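To make the interactor and audio feedback items concrete, here is a minimal sketch of how one placement step could be wired, assuming Unity's XR Interaction Toolkit (2.x). The component and field names are illustrative and not taken from my working file.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Illustrative sketch only: when the learner seats the grabbed part in the correct socket,
// play a confirmation sound and reveal the next instruction on the canvas.
[RequireComponent(typeof(XRSocketInteractor))]
public class SocketTaskStep : MonoBehaviour
{
    [SerializeField] private AudioSource feedbackAudio;   // task-completion audio feedback
    [SerializeField] private GameObject nextInstruction;  // canvas dialog for the next step

    private XRSocketInteractor socket;

    private void OnEnable()
    {
        socket = GetComponent<XRSocketInteractor>();
        // selectEntered fires when an interactable is placed in this socket interactor.
        socket.selectEntered.AddListener(OnPartPlaced);
    }

    private void OnDisable()
    {
        socket.selectEntered.RemoveListener(OnPartPlaced);
    }

    private void OnPartPlaced(SelectEnterEventArgs args)
    {
        if (feedbackAudio != null) feedbackAudio.Play();
        if (nextInstruction != null) nextInstruction.SetActive(true);
    }
}
```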

Watch this Short Video of my Working File

I added music to emulate a body shop's ambience.