Control of robotic swarms through one or more leaders has become the dominant approach to supervisory control over these largely autonomous systems. Resilience in the face of attrition is one of the primary advantages attributed to swarms, yet the presence of leaders makes them vulnerable to decapitation. Algorithms that allow a swarm to hide its leader are a promising solution. In prior work we found that a swarm could be trained, using a graph neural network (GNN), to flock following a leader. An Adversary NN trained to identify that leader (naïve condition) performed substantially better than human observers. When the swarm was trained to hide its leader (deception conditions), however, the advantage reversed, with humans outperforming the Adversary. This human advantage persisted even when the swarm and Adversary were jointly trained, allowing the Adversary to adapt to the swarm's evolving strategies for hiding its leader. The present study investigates the robustness of human leader identification by testing identifications made in the presence of medium and high levels of visual clutter. Clutter degraded human performance to some extent, but human accuracy in leader identification remained well above that of the Adversary in deception conditions. Human performance even approached that for an unhidden leader under joint training. This study confirms the robustness of the human superiority effect and argues for the inclusion of humans in AI systems that may confront learned deception.