PERCEPT — Haptic Flight Simulation
At a glance
A Unity + C# flight simulation with Novint Falcon force feedback, designed to teach elementary‑school students physics concepts through guided, measurable tasks.
Role: Software Developer | DRIVE Lab, Davidson College
Tech: Unity, C#, Novint Falcon device API (haptics I/O)
Timeline: Summers 2022 & 2023
Team: Multi‑university research collaboration led by Dr. Tabitha Peck, Dr. David Borland, and Dr. James Minogue, with faculty stakeholders across institutions
Platform: Windows desktop executable (portable builds)

Problem
Physics learners often struggle to connect equations to felt, intuitive cause‑and‑effect.
PERCEPT explored whether haptically enabled science simulations can improve understanding of foundational concepts like:
- Newton’s Laws
- balanced vs unbalanced forces
- the four forces of flight (lift, weight, thrust, drag)
The research team also cared about experimental conditions, such as:
- visual‑only vs visual + haptics
- first‑person vs third‑person viewpoint
My job was to turn those research goals into a reliable, testable software experience.
Solution
A structured learning experience (not a sandbox)
I built a flight simulation that teaches one concept at a time through a 5‑stage progression:
- Takeoff (thrust controlled; AoA auto‑adjusts only here)
- Altitude targeting via thrust (stability gate: hold the target zone for 5 seconds)
- Altitude targeting via angle of attack (AoA) (hold 5 seconds)
- Combined control (thrust + AoA together)
- Landing challenge (constrained success criteria: touch down within 800 m, within a vertical‑velocity range, in at most 5 attempts)
Each stage is intentionally measurable so learning outcomes can be evaluated.
Two perspectives, same curriculum
Users can choose first‑person or third‑person from the main menu. Both modes share the same task progression so researchers can compare outcomes.
My role
I served as the primary developer across the full development loop:
- Simulation + physics implementation (lift/drag/forces computed from formulas)
- Haptics integration (Falcon input + output; force‑feedback mapping and tuning)
- UX engineering (HUD readability, progress indicators, teachable visuals)
- State + task system (curriculum progression, gating, failure/retry logic)
- Telemetry (session + task metrics for research evaluation)
- Performance optimization (low‑end machines, reduced overhead)
- Packaging + delivery (portable Windows builds; handoff files)
- Documentation (manual and support docs)
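The "lift/drag/forces computed from formulas" bullet refers to the standard lift equation. As a plain‑C# sketch of that computation (the lift‑coefficient model, slope, and stall values here are illustrative assumptions, not the project's tuned parameters):

```csharp
using System;

static class FlightForces
{
    // Standard lift equation: L = 0.5 * rho * v^2 * S * Cl
    // rho: air density (kg/m^3), v: airspeed (m/s), S: wing area (m^2)
    public static double Lift(double rho, double v, double wingArea, double cl)
        => 0.5 * rho * v * v * wingArea * cl;

    // Hypothetical linear lift-coefficient model: Cl grows with AoA
    // up to a stall angle, then collapses. Values are for illustration only.
    public static double LiftCoefficient(double aoaDegrees)
    {
        const double slopePerDegree = 0.1;
        const double stallDegrees = 15.0;
        if (Math.Abs(aoaDegrees) > stallDegrees) return 0.5; // crude post-stall value
        return slopePerDegree * aoaDegrees;
    }
}
```

Drag follows the same pattern with a drag coefficient, so both forces respond visibly (and haptically) to the user's thrust and AoA inputs.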
I also participated in weekly stakeholder meetings to demo builds, gather feedback, and plan iterations, following an Agile workflow.
What I shipped:
- A complete interactive simulation (first‑person + third‑person)
- 5‑stage guided curriculum + task gating
- Haptics integration (input + force feedback output)
- Telemetry logging for research evaluation
- Performance optimizations for low‑end hardware
- Documentation/manual + project handoff materials
Delivery cadence: 20+ builds shipped across iterative testing cycles
System design (how it works)
Task progression as a state machine
The sim’s progression is implemented as an explicit, inspectable task system:
- Each stage has a set of targets (e.g., altitude bands)
- Advancing requires measurable success (e.g., time‑in‑zone)
- Landing is a special‑case stage with unique constraints and retry logic
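A minimal sketch of the time‑in‑zone gate that drives stage advancement (class and member names are illustrative, not the project's actual types):

```csharp
// Illustrative stage gate: the user advances only after holding the
// target altitude band continuously for a required number of seconds.
class AltitudeHoldGate
{
    readonly double _minAlt, _maxAlt, _requiredHoldSeconds;
    double _timeInZone;

    public AltitudeHoldGate(double minAlt, double maxAlt, double holdSeconds)
    { _minAlt = minAlt; _maxAlt = maxAlt; _requiredHoldSeconds = holdSeconds; }

    // Called once per physics tick with the current altitude and delta time.
    public bool Tick(double altitude, double dt)
    {
        if (altitude >= _minAlt && altitude <= _maxAlt)
            _timeInZone += dt;
        else
            _timeInZone = 0; // leaving the band resets the stability timer
        return _timeInZone >= _requiredHoldSeconds;
    }
}
```

Resetting the timer on exit is what makes success a genuine stability measure rather than a "touch the zone once" check.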
Telemetry built into the experience
I instrumented the simulation to log research‑relevant metrics such as:
- timestamps
- altitude
- time‑in‑zone (stability)
- retries / attempts
- time‑to‑finish
- time spent per task
This allowed the research team to evaluate both learning outcomes and where users struggled.
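A sketch of what per‑tick logging can look like (the CSV layout and column set here are assumptions that mirror the metrics listed above, not the project's actual file format):

```csharp
using System;
using System.Globalization;
using System.IO;

// Illustrative telemetry logger: one CSV row per sample, written with an
// invariant culture so decimal formatting is stable across machines.
class TelemetryLogger : IDisposable
{
    readonly StreamWriter _writer;

    public TelemetryLogger(string path)
    {
        _writer = new StreamWriter(path);
        _writer.WriteLine("timestamp,stage,altitude,timeInZone,attempts");
    }

    public void Log(double timestamp, int stage, double altitude,
                    double timeInZone, int attempts)
    {
        _writer.WriteLine(string.Format(CultureInfo.InvariantCulture,
            "{0:F3},{1},{2:F2},{3:F2},{4}",
            timestamp, stage, altitude, timeInZone, attempts));
    }

    public void Dispose() => _writer.Dispose();
}
```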
A “never‑ending” world
To keep the focus on learning (not map edges), I implemented a looping environment:
- terrain + clouds recycle ahead of the player
- assets are reused to keep performance stable
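The recycling idea reduces to a position check: once a segment falls fully behind the player, it jumps ahead by the whole loop length instead of being destroyed and re‑instantiated. A plain‑C# sketch of that rule (names and the single‑axis model are illustrative):

```csharp
static class LoopingWorld
{
    // Moves a terrain/cloud segment to the front of the loop once the
    // player has passed it. segmentLength * segmentCount is the total
    // loop length; reusing segments keeps allocations and frame cost flat.
    public static double RecycleSegment(double segmentZ, double segmentLength,
                                        int segmentCount, double playerZ)
    {
        if (playerZ - segmentZ > segmentLength)
            return segmentZ + segmentLength * segmentCount;
        return segmentZ;
    }
}
```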
Haptics integration (Novint Falcon)
Input
The Falcon’s four function keys form a D‑pad style layout. I mapped them to mirror arrow‑key controls:
- Left/Right → thrust
- Up/Down → angle of attack (AoA)
This created parity between keyboard mode and Falcon mode and made the control scheme easier to learn.
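The mapping itself is a small pure function. A sketch under stated assumptions (button identifiers and this helper are hypothetical; the real Falcon SDK exposes button state through its own API calls):

```csharp
static class FalconInput
{
    // Illustrative names for the four grip buttons in their D-pad layout.
    public enum FalconButton { Left, Right, Up, Down }

    // Returns per-press deltas on the two control channels, mirroring
    // the arrow-key scheme: Left/Right drive thrust, Up/Down drive AoA.
    public static (double thrustDelta, double aoaDelta) Map(FalconButton b) => b switch
    {
        FalconButton.Left  => (-1, 0), // decrease thrust
        FalconButton.Right => (+1, 0), // increase thrust
        FalconButton.Up    => (0, +1), // increase angle of attack
        FalconButton.Down  => (0, -1), // decrease angle of attack
        _ => (0, 0)
    };
}
```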

Output (force feedback)
I used the Falcon device API and wrote bridging functions that translate simulation forces into haptic output.
Key engineering work here:
- mapping forces/AoA to device axes
- tuning sensitivity and smoothing
- iterating on feel based on pilot feedback
I also added the ability to toggle haptics, which was useful for testing and for research conditions.
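A minimal sketch of such a mapping layer, assuming a clamp to the device's safe output range followed by a simple low‑pass filter (the exponential‑smoothing approach and all constants are illustrative, not the project's tuned values):

```csharp
using System;

// Illustrative bridge between simulation forces and device output:
// clamp to the device limit, then low-pass filter so sudden force
// spikes don't jolt the user's hand.
class HapticOutputFilter
{
    readonly double _maxForce;   // device output limit (N)
    readonly double _smoothing;  // 0 = no filtering; closer to 1 = heavier
    double _previous;
    public bool Enabled = true;  // research conditions can toggle haptics off

    public HapticOutputFilter(double maxForce, double smoothing)
    { _maxForce = maxForce; _smoothing = smoothing; }

    public double Next(double simulationForce)
    {
        if (!Enabled) return 0;
        double clamped = Math.Clamp(simulationForce, -_maxForce, _maxForce);
        _previous = _smoothing * _previous + (1 - _smoothing) * clamped;
        return _previous;
    }
}
```

The smoothing constant is exactly the kind of parameter that gets tuned by feel in pilot sessions rather than derived analytically.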
Testing: what changed because of feedback
We tested with students, preservice educators, and lab/faculty collaborators. The most meaningful improvements came from making the experience legible, learnable, and stable.
1) Progress was unclear → I made success criteria visible
Problem: Users didn’t know how far they’d flown, how far remained, or how long they needed to stay in a target zone.
Changes:
- Added a distance progress indicator during landing
- Added an in‑zone progress/timer indicator so “hold for N seconds” wasn’t a guess
Why it mattered: When users can see the goal, they spend effort on force control rather than UI interpretation.
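Both indicators reduce to a normalized 0..1 fraction the HUD can render as a bar. A sketch with hypothetical names:

```csharp
using System;

static class HudProgress
{
    // Landing: fraction of the allowed landing distance already covered.
    public static double DistanceProgress(double traveled, double targetDistance)
        => Math.Clamp(traveled / targetDistance, 0, 1);

    // Hold task: fraction of the required in-zone time accumulated so far,
    // which makes "hold for N seconds" visible instead of a guess.
    public static double HoldProgress(double timeInZone, double requiredSeconds)
        => Math.Clamp(timeInZone / requiredSeconds, 0, 1);
}
```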
2) Tasks were too hard → I tuned the scaffold, not just the physics
Problem: Early versions required too much precision too soon.
Changes:
- Reduced required hold time (while keeping a measurable stability gate)
- Expanded the vertical range of zones to give users time to correct
Why it mattered: A learning simulation is only effective if most learners can reach “productive struggle” instead of frustration.
3) AoA was either invisible or unrealistic → I balanced realism with teachability
Problem: Realistic AoA changes were too subtle; direct mapping looked unnatural.
Changes:
- Tuned the plane’s visual rotation for “noticeable but plausible”
- Added a HUD indicator for the actual AoA value
Why it mattered: This made cause‑and‑effect perceivable without turning the simulation into an arcade game.
4) Haptics fought the user → I built a smoothing/mapping layer
Problem: Force feedback was initially too aggressive.
Changes:
- Adjusted force→device output mapping
- Tuned sensitivity + smoothing based on pilot sessions
Why it mattered: Haptics should reinforce the concept, not become a distraction.
5) Low‑end computers struggled → I optimized for reliability
Problem: The simulation lagged on older school hardware.
Changes (examples):
- Reduced per‑frame overhead through cleaner state handling
- Optimized Update/FixedUpdate usage
- Simplified assets and reused environment segments
Why it mattered: Stable performance is part of UX — especially in classroom contexts.
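One representative trick from that pass, sketched in plain C# (the helper is hypothetical): purely cosmetic work such as HUD string formatting runs every Nth frame, while physics stays on the fixed timestep.

```csharp
using System;

// Illustrative frame-budget helper: invokes an action only once every
// N calls, spreading non-critical per-frame work across frames.
class ThrottledAction
{
    readonly int _interval;
    readonly Action _action;
    int _frame;

    public ThrottledAction(int everyNFrames, Action action)
    { _interval = everyNFrames; _action = action; }

    // Call once per frame (e.g. from Update); the action fires on
    // every Nth call only.
    public void Tick()
    {
        if (++_frame % _interval == 0) _action();
    }
}
```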
Results & impact
- Delivered a complete, research‑ready simulation with haptics support and multiple test conditions.
- Shipped 20+ iterative builds across two summers, guided by usability/pilot feedback.
- Produced a measurable experience with built‑in telemetry designed for research trials.
- Authored documentation + handoff materials so the project could be used and maintained beyond my involvement.
What I’d do next
If I were continuing PERCEPT today:
- Data‑driven curriculum: move stage/target parameters to JSON/config so researchers can edit tasks without touching code.
- Richer telemetry summaries: auto‑generate per‑session reports (time per task, retries, stability curves, landing quality).
- Haptics loop refinement: further separate device update requirements from render/UX where possible.
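For the data‑driven curriculum, a stage config might look like this (field names and values are a hypothetical schema, not an existing file):

```json
{
  "stages": [
    { "name": "takeoff",         "control": "thrust" },
    { "name": "altitude-thrust", "targetAltitude": [100, 120], "holdSeconds": 5 },
    { "name": "altitude-aoa",    "targetAltitude": [100, 120], "holdSeconds": 5 },
    { "name": "combined",        "targetAltitude": [100, 120], "holdSeconds": 5 },
    { "name": "landing",         "maxDistanceMeters": 800, "maxAttempts": 5 }
  ]
}
```

With stage parameters externalized like this, researchers could adjust hold times, zone sizes, and landing constraints between trials without a rebuild.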