Urban Air Mobility (UAM) extends transportation from the ground into near-ground airspace and is envisioned as a revolution for transportation systems. Comprehensive scene perception is the foundation of Autonomous Aerial Vehicles (AAVs). However, AAVs face a primary perception challenge: three-dimensional piloting makes their visual perception easily obstructed by skyscrapers in urban environments, so demanding perception-learning requirements conflict with view-limited visual information. To overcome this challenge, multi-view learning has been proposed, which collects multi-view data to train the onboard deep learning model. However, traditional multi-view learning runs centrally on a single device, which is difficult to deploy in dynamic environments. Accordingly, this paper proposes Graph Convolutional Network based Distributed Multi-View learning (GCN-DMV), which exploits the relation-extraction capability of GCNs to integrate single-view representation learning. The proposed distributed multi-view learning framework allows distinct single-view representation learners to be integrated. Moreover, owing to the diversity gain across different single-view learners, GCN-DMV with heterogeneous single-view representation learning outperforms GCN-DMV with homogeneous single-view representation learning in terms of recognition accuracy. Simulation experiments on a realistic multi-view dataset verify the efficiency of the proposed distributed multi-view learning framework.
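To make the fusion idea concrete, the following is a minimal sketch of how a GCN layer can integrate single-view representations: each view is a node in a view-relation graph, and one round of graph convolution lets every view aggregate information from related views before pooling into a scene-level feature. All names, shapes, the fully connected view graph, and the random weights below are illustrative assumptions, not the paper's actual GCN-DMV architecture.

```python
import numpy as np

def normalize_adjacency(adj):
    """Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}."""
    a_hat = adj + np.eye(adj.shape[0])
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    return d_inv_sqrt @ a_hat @ d_inv_sqrt

def gcn_fuse(view_embeddings, adj, weight):
    """One GCN layer: each view node aggregates its neighbors' features."""
    a_norm = normalize_adjacency(adj)
    return np.maximum(a_norm @ view_embeddings @ weight, 0.0)  # ReLU

# Three views, each contributing a 4-dim embedding of the same scene
# (e.g. produced by distinct single-view representation learners).
rng = np.random.default_rng(0)
views = rng.normal(size=(3, 4))
# Fully connected view-relation graph: every view informs every other.
adj = np.ones((3, 3)) - np.eye(3)
w = rng.normal(size=(4, 4))

fused = gcn_fuse(views, adj, w)  # (3, 4): relation-aware view features
scene = fused.mean(axis=0)       # pooled scene representation, shape (4,)
```

In a distributed deployment, the per-view embeddings would be computed on separate devices and only the compact representations exchanged for the graph-convolutional fusion step.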