Federated learning (FL) enables wireless terminals to collaboratively learn a shared parameter model while keeping all training data on the devices themselves. Regardless of the parameter-sharing scheme employed, the learning model must be adapted to the underlying network architecture, because an ill-suited model degrades learning performance and, even worse, can lead to model divergence, particularly under asynchronous transmission in resource-limited distributed networks. To address this issue, this paper proposes a decentralized learning model and develops an asynchronous parameter-sharing algorithm for resource-limited distributed Internet of Things (IoT) networks, which improves learning efficiency and enables communication-efficient training. By jointly accounting for the convergence bound of federated learning and the transmission delay of wireless communications, we further develop a node scheduling and bandwidth allocation algorithm to improve learning performance. Extensive simulation results corroborate the effectiveness of the proposed distributed algorithm in terms of fast convergence of the learning model and low transmission delay.