Case Study: Evaluating Precision and Spatial Perception in VR-Based Surgical Training
Role: UX Researcher (Human Factors)
Focus: Usability Testing, Usability Evaluation, Precision Analysis
Tools: Blender, Python, Shapr3D

Problem Space
Surgical accuracy, especially during high-force tasks like hip implant hammering, is critical. Traditional VR training systems lack validation for realistic force feedback and spatial precision, particularly in high-impact scenarios.
This project explored: Can users perform accurate, real-world hammering actions based only on VR visual feedback?
Research Objectives
- Measure and compare hammering precision in VR vs. real conditions.
- Evaluate how hammer weight and target position impact spatial accuracy.
- Identify perceptual distortions and their implications for VR-based surgical training.
Real vs. VR


Study Design & Methodology
Experimental Setup
- Built a physical and VR environment with 6 target zones.
- Designed custom hammers (300g, 600g, 900g), each fitted with a needle tip and VR tracker.
- Simulated hammering on targets filled with kinetic sand for safety.
Studies Conducted
- Hammer weight study: Tested precision using hammers of three different weights.
- Target position study: Tested accuracy across six spatially varied target zones.
VR Environment

Real Environment

Each participant completed tasks in both VR and real environments, allowing within-subject comparison.
Data Collection & Analysis
- Real-time positional data captured using Unity + HTC Vive + Vive Tracker.
- Developed a custom Python + OpenCV tool to semi-automatically evaluate millimeter-level accuracy on paper target sheets.
- Applied MANOVA and descriptive statistics to analyze spatial results (a sketch of this analysis follows below).
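To make the analysis step concrete, the following is a minimal sketch of how a MANOVA on the two-dimensional strike error could be run in Python with pandas and statsmodels. The file name and column names (error_x, error_y, condition, weight) are hypothetical placeholders, not the study's actual data format.

```python
# Illustrative sketch only: descriptive statistics and MANOVA on the 2-D strike
# error. The CSV and column names are hypothetical placeholders.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

strikes = pd.read_csv("strike_errors.csv")

# Descriptive statistics per condition (real vs. VR): mean and spread of the
# horizontal and vertical deviations from the target centre, in millimetres.
print(strikes.groupby("condition")[["error_x", "error_y"]].agg(["mean", "std"]))

# MANOVA: do condition and hammer weight jointly affect the 2-D error vector?
fit = MANOVA.from_formula("error_x + error_y ~ condition * weight", data=strikes)
print(fit.mv_test())
```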
Improving Evaluation Efficiency with a Custom Python Tool
As part of the study, we needed to analyze thousands of hammer strikes on paper target sheets to measure spatial precision. Originally, our team planned to do this manually, requiring:
- Careful alignment of each target
- Measuring X and Y coordinates twice per hit
- Transcribing data into spreadsheets
Manual time per sheet: ~10 minutes.
With 33 participants and multiple sheets each, this would have taken over 40 hours total.
My UX-Informed Solution
To save time and reduce error, I proposed and built a semi-automated evaluation tool using Python and OpenCV, designed with usability, speed, and control in mind.
New time per sheet: ~30 seconds
Time saved: over 90%
A video demonstration of how the tool works
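The tool itself is shown in the video above; purely as an illustration, the sketch below outlines the kind of semi-automated OpenCV workflow described, where the operator clicks the target centre and each strike mark, and the offsets are converted to millimetres and written to a CSV. The file names, the sheet-width constant, and the click interaction are assumptions, not the actual implementation.

```python
# Illustrative sketch only: operator clicks the target centre, then each strike
# mark; pixel offsets are converted to millimetres and appended to a CSV.
# File names and the A4 sheet-width assumption are hypothetical.
import csv
import cv2

SHEET_WIDTH_MM = 210.0   # assumed sheet width, used for the pixel-to-mm scale
clicks = []              # click positions collected from the operator (pixels)

def on_click(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDOWN:
        clicks.append((x, y))

image = cv2.imread("sheet_scan.jpg")         # hypothetical scan of one target sheet
mm_per_px = SHEET_WIDTH_MM / image.shape[1]  # assumes the scan is cropped to the sheet

cv2.namedWindow("sheet")
cv2.setMouseCallback("sheet", on_click)
print("Click the target centre, then every strike mark. Press q when done.")
while True:
    cv2.imshow("sheet", image)
    if cv2.waitKey(20) & 0xFF == ord("q"):
        break
cv2.destroyAllWindows()

if len(clicks) >= 2:
    centre, strikes = clicks[0], clicks[1:]
    with open("strike_offsets_mm.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for x, y in strikes:
            # X/Y deviation of each strike from the target centre, in millimetres.
            writer.writerow([(x - centre[0]) * mm_per_px, (y - centre[1]) * mm_per_px])
```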
Key Findings
- Accuracy drops in VR: Users consistently hit too low and to the right, showing spatial distortion.
- Hammer weight matters: Medium-weight hammers led to the most imprecise strikes, especially vertically.
- Target position impacts precision: Side and lower targets were harder to hit in VR.
- Real-world risk: 5.8% of VR hits missed the target by over 30 mm, an error large enough to miss a surgical tool.