Asynchronously Assigning, Monitoring, and Managing Assembly Goals in Virtual Reality for High-Level Robot Teleoperation
- Resource Type
- Conference
- Authors
- Aoyama, Shutaro; Liu, Jen-Shuo; Wang, Portia; Jain, Shreeya; Wang, Xuezhen; Xu, Jingxi; Song, Shuran; Tversky, Barbara; Feiner, Steven
- Source
- 2024 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 450-460, Mar. 2024
- Subject
- Computing and Processing
Visualization
Three-dimensional displays
Prototypes
Virtual reality
User interfaces
Spatiotemporal phenomena
Task analysis
Human-centered computing
Human computer interaction (HCI)
Interaction paradigms
Interaction design
Interaction design process and methods
User interface design
Computer systems organization
Embedded and cyber-physical systems
Robotics
External interfaces for robotics
- ISSN
- 2642-5254
We present a prototype virtual reality user interface for robot teleoperation that supports high-level specification of 3D object positions and orientations in remote assembly tasks. Users interact with virtual replicas of task objects, asynchronously assigning multiple goals in the form of 6DoF destination poses without needing to be familiar with specific robots or their capabilities, and managing and monitoring the execution of those goals. The user interface employs two different spatiotemporal visualizations for assigned goals: one represents all goals within the user’s workspace (Aggregated View), while the other depicts each goal within a separate world in miniature (Timeline View). We conducted a user study of the interface without the robot system to compare how these visualizations affect user efficiency and task load. The results show that although participants finished the task faster with the Aggregated View, they preferred the Timeline View.