Crisis Response Simulation — A Training and Safety App
At a glance
A first-person desktop simulation that teaches and tests crisis-response protocols during an active shooter event, set inside a real building layout.
The core engineering challenge was balancing realistic spatial affordances (rooms, doors, hiding spaces, line-of-sight risk) with performance constraints and ethical depiction requirements. We shipped a short, replayable training loop with clear feedback, fast protocol access, and iterative improvements based on user testing.
Role: Software Developer (environment import/optimization, scripting, UI); testing and documentation.
Tools: Unity (C#), SketchUp → Unity, GitHub, Unity UI/Canvas, spatial audio
Timeline: Jan 2023 – May 2023
Team: Awais Abid, Christos Koumpotis, supervised by Dr. Tabitha Peck

Problem
Emergency protocol knowledge often breaks down under stress. We wanted to create a simulation that makes the protocol memorable and actionable by placing the user inside a realistic environment, forcing quick decisions, and providing immediate feedback.
Key requirement: the experience had to avoid triggering imagery, be fast to understand, and be light enough to run on constrained machines.
Solution
A short (≈2-minute) first-person training simulation with:
- A real building layout with all major hiding spaces and door interactions.
- An abstract “threat field” representation (non-graphic) that creates urgency without violent depiction.
- On-demand protocol access (quick reference overlay) without breaking flow.
- Clear success/failure feedback that is readable and immediate.
My role spanned the full pipeline, with primary responsibility for:
- importing and optimizing the building model,
- implementing interaction scripts and UI,
- packaging the Windows build and writing documentation.
We collaboratively researched protocols, consulted campus stakeholders, ran user tests, and iterated the design based on feedback.
System design
Environment pipeline and performance budget
We modeled the building in SketchUp from floor plans, then imported it into Unity. The model originally included substantial structural detail, but the real requirement was simpler: keep every meaningful hiding affordance while reducing geometry enough to run smoothly.
Outcome: we simplified non-essential geometry (e.g., wall complexity in restrooms) to remove lag on lower-spec machines while preserving the spaces players need for the simulation to remain realistic.
Interaction and event architecture
We designed the simulation as a controlled loop:
- exploration → trigger event → react → hide/secure → survive/fail
Under the hood, this was implemented using:
- proximity-based interaction prompts (only visible when relevant),
- trigger volumes to drive timing and scenario progression,
- a simple state model for UI visibility and outcome messages.
This approach kept the code and logic deterministic and easy to debug during iteration.
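The trigger-and-state pattern above can be sketched in Unity C#. Class, method, and state names here are illustrative assumptions for the write-up, not the project's actual scripts:

```csharp
using UnityEngine;

// Sketch: a trigger volume advances a simple state machine that drives
// UI visibility and outcome messages. Names are hypothetical.
public enum SimState { Exploring, ThreatActive, Hidden, Survived, Failed }

public class ScenarioController : MonoBehaviour
{
    public SimState State { get; private set; } = SimState.Exploring;

    // Called by a trigger volume (e.g., OnTriggerEnter on a collider
    // marked "Is Trigger") when the player crosses the event threshold.
    public void OnThreatTriggered()
    {
        if (State == SimState.Exploring)
            SetState(SimState.ThreatActive);
    }

    // Called when the player reaches (or fails to reach) a hiding space.
    public void OnPlayerHidden(bool secured)
    {
        if (State == SimState.ThreatActive)
            SetState(secured ? SimState.Hidden : SimState.Failed);
    }

    void SetState(SimState next)
    {
        State = next;
        // UI and outcome messages key off this single state value,
        // which keeps the loop deterministic and easy to debug.
        Debug.Log($"State -> {next}");
    }
}
```

Because every transition funnels through one method on one enum, a bug report ("the fail message showed while I was hidden") reduces to inspecting a single log of state changes.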
UX/UI: clarity under stress
The UI goal was “minimal but decisive”:
- Instructions show only when contextually relevant, to prevent overload.
- Outcome feedback uses color/contrast cues so users can quickly interpret what happened.
- The protocol reference is a short, closable overlay with actionable statements so users don’t need to pause or read long text.
A key design constraint was to keep the interface readable and non-distracting while the user is moving.
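The "show only when contextually relevant" rule can be sketched as a proximity-gated prompt; the radius value and names are illustrative assumptions:

```csharp
using UnityEngine;

// Sketch: an interaction hint renders only while the player is within
// range of the interactable, keeping the screen clear during movement.
public class ProximityPrompt : MonoBehaviour
{
    public Transform player;
    public GameObject promptUI;      // e.g., a small world-space Canvas label
    public float showRadius = 2.5f;  // illustrative tuning value

    void Update()
    {
        bool inRange = Vector3.Distance(player.position, transform.position)
                       <= showRadius;
        if (promptUI.activeSelf != inRange)
            promptUI.SetActive(inRange); // toggle only on change
    }
}
```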
Threat representation (ethical + functional)
We avoided any violent or avatar-based depiction by representing the shooter as an expanding field / line-of-sight risk—a functional abstraction that:
- preserves the “you are in danger if seen” mechanic,
- avoids triggering imagery,
- avoids biased or discriminatory character representation.
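A minimal sketch of this abstraction, assuming an expanding danger radius plus a raycast occlusion check (field names and growth rate are illustrative, not the shipped values):

```csharp
using UnityEngine;

// Sketch: the "threat field" is a growing radius; the player is at risk
// only if inside the radius AND visible (no wall or door in between).
public class ThreatField : MonoBehaviour
{
    public Transform player;
    public float radius = 1f;
    public float growthPerSecond = 0.5f;
    public LayerMask occluders;      // walls, closed doors

    void Update()
    {
        radius += growthPerSecond * Time.deltaTime;
    }

    public bool PlayerAtRisk()
    {
        Vector3 toPlayer = player.position - transform.position;
        if (toPlayer.magnitude > radius) return false;
        // Geometry blocking the ray means hidden:
        // this preserves "you are in danger if seen" without an avatar.
        return !Physics.Raycast(transform.position, toPlayer.normalized,
                                toPlayer.magnitude, occluders);
    }
}
```

Representing the threat as a field also makes difficulty tuning a matter of two scalars (radius, growth rate) rather than AI behavior.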
“Feel” and feedback tuning
To make the simulation legible and embodied (without adding complexity), we iterated on sensory feedback:
- player speed tied to motion/blur cues,
- camera field-of-view used to reinforce sense of space,
- spatial audio cues used to communicate distance/approach of danger.
This was one of the most important parts of turning “a working prototype” into “an understandable experience.”
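The speed-to-camera coupling can be sketched as follows; the FOV range, speed cap, and smoothing factor are illustrative assumptions, not the shipped tuning:

```csharp
using UnityEngine;

// Sketch: the camera's field of view eases toward a wider value as the
// player moves faster, reinforcing urgency and sense of space.
public class SpeedFovCue : MonoBehaviour
{
    public Camera cam;
    public CharacterController controller;
    public float baseFov = 60f;
    public float maxFov = 72f;
    public float maxSpeed = 6f;      // speed at which FOV is fully widened

    void LateUpdate()
    {
        float t = Mathf.Clamp01(controller.velocity.magnitude / maxSpeed);
        float target = Mathf.Lerp(baseFov, maxFov, t);
        // Ease toward the target so the cue reads as motion, not a snap.
        cam.fieldOfView = Mathf.Lerp(cam.fieldOfView, target,
                                     10f * Time.deltaTime);
    }
}
```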
Testing and iteration
We tested with ~10 classmates/campus students and used feedback to tighten usability:
Example changes based on testing
- Users wanted clearer outcome feedback → we added colored text cues for success/fail states.
- Users needed protocol access without breaking momentum → we made the protocol overlay short, scannable, and closable.
- Users reported fewer restarts once protocol recall became easier (less friction, more continuity).
Results
We delivered a complete, replayable desktop simulation with:
- a realistic navigable environment,
- controlled interaction logic,
- minimal UI that supports recall and comprehension,
- a safe abstraction for sensitive content,
- and a set of iteration-driven UX improvements backed by user tests.
What I’d improve next
If I extended this project, I would:
- add a lightweight analytics log (attempt count, time-to-hide, common failure points),
- add difficulty settings (threat speed, visibility radius, time-to-survive),
- improve interaction affordances (door feedback, input hints, accessibility options).
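The proposed analytics log could be as small as one CSV row per attempt. This is a sketch of the idea from the list above, not shipped code; the fields and file format are hypothetical:

```csharp
using System.Collections.Generic;
using System.IO;

// Sketch: a lightweight per-attempt log (attempt count, time-to-hide,
// outcome) that could later surface common failure points.
public class AttemptLog
{
    readonly List<string> rows =
        new List<string> { "attempt,timeToHideSec,outcome" };
    int attempt;

    public void Record(float timeToHideSec, string outcome)
    {
        attempt++;
        rows.Add($"{attempt},{timeToHideSec:F2},{outcome}");
    }

    public void Save(string path) => File.WriteAllLines(path, rows);
}
```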