Human-Robot Teaming Study
2023-present

Introduction
Following our implementation of the spatial augmented reality (SAR) system on the scooter project [1], [2], [3], [4] as a means of operating a manipulator, we wanted to explore other applications of this system. We adapted it to communicate intent and improve robot explainability for a manipulator [5] and a mobile robot [6].
To understand how this type of intent communication can improve interactions between humans and mobile robots, we set out to design a study measuring whether using SAR to communicate a mobile robot's intent improves the throughput of a human-robot team.
Robot implementation
We implemented our SAR system on a mobile robot with several improvements over our prior work:
- Use of the timed elastic band (TEB) planner, enabling an accurate path to be projected that accounts for the robot's maximum linear and rotational acceleration
- Use of a wide-angle lens to expand the projected image
- Implementation of a method to correct the barrel distortion introduced by this lens
- Use of ProCamCalib [7], a method that calibrates projector-camera systems automatically, faster and more accurately than our prior manual calibration method
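As an illustration of the barrel-distortion correction, the sketch below pre-warps the image sent to the projector using a simple radial (Brown-Conrady-style) model so that the lens's distortion cancels it out. This is a minimal sketch, not our exact implementation: the coefficients `k1`/`k2` and the nearest-neighbour remap are illustrative assumptions, and the coefficients would come from calibration.

```python
import numpy as np

def undistort_map(width, height, k1, k2=0.0):
    """Build a remap grid that pre-distorts an image so that the
    projector lens's barrel distortion (radial model) cancels out.
    k1 and k2 are radial distortion coefficients (assumed calibrated)."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
    # Normalized coordinates relative to the image center.
    xn = (xs - cx) / cx
    yn = (ys - cy) / cy
    r2 = xn ** 2 + yn ** 2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2  # radial distortion factor
    # For each output pixel, sample the source image at the distorted location.
    map_x = np.clip(xn * scale * cx + cx, 0, width - 1)
    map_y = np.clip(yn * scale * cy + cy, 0, height - 1)
    return map_x, map_y

def remap_nearest(img, map_x, map_y):
    """Nearest-neighbour remap (a real pipeline would use interpolation,
    e.g. cv2.remap)."""
    return img[map_y.round().astype(int), map_x.round().astype(int)]
```

In practice the remap grid is computed once at startup and applied to every rendered frame before it reaches the projector.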
We started with a Pioneer 3-DX mobile robot base, but found we were limited by its payload capacity and wanted to mount the projector higher, so we switched to a Fetch mobile manipulator. The projector was mounted above Fetch's head, and rendering of the projected image was handled by an NVIDIA Jetson AGX Xavier. A wide-angle photography lens was mounted in front of the front element of the Epson EF12 projector with a 3D-printed bracket. Power to the projector and computer was provided by a lithium-ion battery bank separate from Fetch's onboard batteries, giving a runtime of about two hours at a power consumption of approximately 100 W.
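The runtime figure follows from simple battery-sizing arithmetic; a minimal sketch, assuming a derating factor (illustrative, not measured) for converter losses and usable capacity:

```python
def required_capacity_wh(power_w, runtime_h, derating=0.85):
    """Estimate the battery bank capacity (Wh) needed to run the projector
    and computer. Values from our setup: ~100 W draw, ~2 h target runtime.
    The derating factor is an assumption covering converter losses and the
    usable fraction of rated capacity."""
    return power_w * runtime_h / derating
```

At 100 W for two hours this suggests a bank of roughly 235 Wh.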
Path projection demo
This video shows the robot’s intended path being projected onto the floor as a green line. The robot’s goal is shown as a larger green circle at the end of the path.
Study design
Our study investigates whether our system improves the throughput of a human-robot team compared to a control condition in which the robot does not communicate its intent. We constructed a 2 m × 4 m workspace with a series of goals placed around the perimeter. The human and robot are assigned goals synchronously; by carefully selecting goal pairs, we intentionally create certain types of path conflicts. A smartphone app guides the human to each goal and prompts them to scan a QR code there.
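One way to select goal pairs that force a conflict is to check whether the two agents' paths cross. The sketch below is a simplification, assuming straight-line paths (the robot actually follows a TEB-planned path) and hypothetical function names; it uses the standard segment-intersection orientation test.

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB (orientation test)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def paths_conflict(h_start, h_goal, r_start, r_goal):
    """True if the straight-line human and robot paths cross, i.e. the
    goal assignment would force the pair to negotiate a path conflict.
    Points are (x, y) tuples in workspace coordinates."""
    d1 = cross(h_start, h_goal, r_start)
    d2 = cross(h_start, h_goal, r_goal)
    d3 = cross(r_start, r_goal, h_start)
    d4 = cross(r_start, r_goal, h_goal)
    # Strict crossing: each segment's endpoints lie on opposite sides
    # of the other segment's supporting line.
    return (d1 * d2 < 0) and (d3 * d4 < 0)
```

A goal assigner could filter candidate goal pairs with this predicate to deliberately produce (or avoid) crossing paths in each trial.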
The shared workspace, with goals around its periphery
A goal: lifting a flap reveals a QR code to be scanned with the smartphone app
The smartphone app, prompting the participant to scan the QR code at goal B
A ROS node assigns goals to both the human and the robot. The smartphone app queries a REST API integrated into the goal assignment node to receive the user's goals. Each completed goal is logged to a MongoDB database with its duration and goal ID, allowing us to calculate completion times for different types of interactions.
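The analysis over the logged records can be sketched as follows. This is a minimal sketch using plain dictionaries rather than a live MongoDB collection, and the field names (`goal_id`, `duration_s`, `interaction`) are illustrative assumptions about the document schema.

```python
from collections import defaultdict
from statistics import mean

def mean_durations(records):
    """Group logged goal completions by interaction type and average
    their durations. Each record mirrors a logged MongoDB document."""
    by_type = defaultdict(list)
    for rec in records:
        by_type[rec["interaction"]].append(rec["duration_s"])
    return {kind: mean(vals) for kind, vals in by_type.items()}

# Example: two conflict trials and one conflict-free trial.
records = [
    {"goal_id": "A", "duration_s": 10.0, "interaction": "conflict"},
    {"goal_id": "B", "duration_s": 14.0, "interaction": "conflict"},
    {"goal_id": "C", "duration_s": 8.0, "interaction": "none"},
]
```

Comparing these per-type means between the SAR and control conditions would give the throughput measure the study is designed around.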
The execution of this study design remains future work as of the time of writing.