Kennesaw State University researcher Hansol Rheem is examining how virtual reality and robotic teammates can help train emergency responders, according to a statement released on March 11. The study focuses on improving teamwork between humans and artificial intelligence in high-pressure situations such as mass-casualty events.
The research is significant because it explores not only the effectiveness of technology in training but also the psychological factors that influence how people interact with AI systems. Understanding these dynamics could lead to better preparation for professionals who may work alongside intelligent machines in critical scenarios.
Rheem, an assistant professor of psychology, said, “Mass casualty triage has been at the center of interest in the human factors community because of its unique situation. These situations are complex and spontaneous, so effective planning and training are critical.” Traditional training methods include lecture-based instruction or live simulations with volunteers acting as victims. Rheem noted, “Lecture-based training is efficient, but it’s not as interactive or engaging as it should be. Live simulations are more realistic and effective, but they take a lot of time and money to set up.”
To address these challenges, Rheem’s team developed a video game simulating a mass-casualty event in which participants make triage decisions with assistance from a robot teammate. Participants were divided into three groups: one believed the robot was controlled by a human (observer group), another thought it was an autonomous collaborator (collaborator group), and the third saw it as a competitor (competitor group). The study found that participants who believed the robot was operated by a human showed greater learning gains and credited the robot for their success. In contrast, participants who viewed the robot as an autonomous collaborator were more likely to blame failures on it.
Rheem explained that trust plays a key role in how people interact with AI: “When we believe the robot is controlled by a human, we may set lower expectations than we would for an autonomous robot. Expectations for an autonomous robot can sometimes be unrealistic… But when those expectations are not met… trust in the AI can drop quickly and we may become more inclined to blame the AI and ignore its advice.”
Drey Bailey, a psychology junior involved in the project, said, “Working on this project allowed me to apply what I’ve learned in class to real research… It’s interesting to see how this study can open the door to other types of VR training designed to improve decision-making in high-anxiety, time-pressured situations.”
Looking ahead, Rheem said his team plans to expand their research by testing different methods that help learners develop appropriate levels of trust in AI systems. He added, “As AI becomes more integrated into society… If we can design training that strengthens both expertise and collaboration with AI, that’s a significant step forward… We need to be ready when that day comes.”